00:00:00.000 Started by upstream project "autotest-per-patch" build number 132779
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:02.131 The recommended git tool is: git
00:00:02.132 using credential 00000000-0000-0000-0000-000000000002
00:00:02.134 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:02.146 Fetching changes from the remote Git repository
00:00:02.151 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:02.163 Using shallow fetch with depth 1
00:00:02.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:02.164 > git --version # timeout=10
00:00:02.175 > git --version # 'git version 2.39.2'
00:00:02.175 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:02.186 Setting http proxy: proxy-dmz.intel.com:911
00:00:02.186 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.749 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.762 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.773 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.773 > git config core.sparsecheckout # timeout=10
00:00:07.785 > git read-tree -mu HEAD # timeout=10
00:00:07.802 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.824 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.825 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.912 [Pipeline] Start of Pipeline
00:00:07.926 [Pipeline] library
00:00:07.928 Loading library shm_lib@master
00:00:07.928 Library shm_lib@master is cached. Copying from home.
00:00:07.946 [Pipeline] node
00:33:36.591 Resuming build at Mon Dec 09 09:44:43 UTC 2024 after Jenkins restart
00:33:36.596 Ready to run at Mon Dec 09 09:44:43 UTC 2024
00:33:46.969 Running on VM-host-SM16 in /var/jenkins/workspace/nvme-vg-autotest
00:33:46.974 [Pipeline] {
00:33:46.999 [Pipeline] catchError
00:33:47.002 [Pipeline] {
00:33:47.062 [Pipeline] wrap
00:33:47.083 [Pipeline] {
00:33:47.100 [Pipeline] stage
00:33:47.106 [Pipeline] { (Prologue)
00:33:47.188 [Pipeline] echo
00:33:47.192 Node: VM-host-SM16
00:33:47.222 [Pipeline] cleanWs
00:33:47.255 [WS-CLEANUP] Deleting project workspace...
00:33:47.255 [WS-CLEANUP] Deferred wipeout is used...
00:33:47.274 [WS-CLEANUP] done
00:33:48.097 [Pipeline] setCustomBuildProperty
00:33:48.380 [Pipeline] httpRequest
00:33:52.515 [Pipeline] echo
00:33:52.537 Sorcerer 10.211.164.101 is dead
00:33:52.551 [Pipeline] httpRequest
00:33:52.625 [Pipeline] echo
00:33:52.628 Sorcerer 10.211.164.96 is dead
00:33:52.639 [Pipeline] httpRequest
00:33:55.053 [Pipeline] echo
00:33:55.054 Sorcerer 10.211.164.20 is alive
00:33:55.063 [Pipeline] retry
00:33:55.065 [Pipeline] {
00:33:55.076 [Pipeline] httpRequest
00:33:55.080 HttpMethod: GET
00:33:55.081 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:33:55.082 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:33:55.085 Response Code: HTTP/1.1 200 OK
00:33:55.086 Success: Status code 200 is in the accepted range: 200,404
00:33:55.086 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:33:56.092 [Pipeline] }
00:33:56.105 [Pipeline] // retry
00:33:56.112 [Pipeline] sh
00:33:56.404 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:33:56.418 [Pipeline] httpRequest
00:33:56.779 [Pipeline] echo
00:33:56.780 Sorcerer 10.211.164.20 is alive
00:33:56.789 [Pipeline] retry
00:33:56.791 [Pipeline] {
00:33:56.804 [Pipeline] httpRequest
00:33:56.809 HttpMethod: GET
00:33:56.809 URL: http://10.211.164.20/packages/spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz
00:33:56.810 Sending request to url: http://10.211.164.20/packages/spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz
00:33:56.812 Response Code: HTTP/1.1 404 Not Found
00:33:56.813 Success: Status code 404 is in the accepted range: 200,404
00:33:56.813 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz
00:33:56.817 [Pipeline] }
00:33:56.834 [Pipeline] // retry
00:33:56.839 [Pipeline] sh
00:33:57.123 + rm -f spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz
00:33:57.137 [Pipeline] retry
00:33:57.139 [Pipeline] {
00:33:57.159 [Pipeline] checkout
00:33:57.169 The recommended git tool is: NONE
00:33:57.181 using credential 00000000-0000-0000-0000-000000000002
00:33:57.183 Wiping out workspace first.
00:33:57.192 Cloning the remote Git repository
00:33:57.195 Honoring refspec on initial clone
00:33:57.198 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk
00:33:57.199 > git init /var/jenkins/workspace/nvme-vg-autotest/spdk # timeout=10
00:33:57.208 Using reference repository: /var/ci_repos/spdk_multi
00:33:57.208 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk
00:33:57.208 > git --version # timeout=10
00:33:57.213 > git --version # 'git version 2.25.1'
00:33:57.213 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:33:57.217 Setting http proxy: proxy-dmz.intel.com:911
00:33:57.217 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/95/25495/5 +refs/heads/master:refs/remotes/origin/master # timeout=10
00:34:46.212 Avoid second fetch
00:34:46.227 Checking out Revision b71c8b8dd8fbedef346b576febb9d3eedde82b3c (FETCH_HEAD)
00:34:46.515 Commit message: "env: explicitly set --legacy-mem flag in no hugepages mode"
00:34:46.191 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10
00:34:46.196 > git config --add remote.origin.fetch refs/changes/95/25495/5 # timeout=10
00:34:46.200 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10
00:34:46.214 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:34:46.221 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:34:46.229 > git config core.sparsecheckout # timeout=10
00:34:46.232 > git checkout -f b71c8b8dd8fbedef346b576febb9d3eedde82b3c # timeout=10
00:34:46.518 > git rev-list --no-walk a2f5e1c2d535934bced849d8b079523bc74c98f1 # timeout=10
00:34:46.540 > git remote # timeout=10
00:34:46.543 > git submodule init # timeout=10
00:34:46.595 > git submodule sync # timeout=10
00:34:46.644 > git config --get remote.origin.url # timeout=10
00:34:46.653 > git submodule init # timeout=10
00:34:46.700 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
00:34:46.705 > git config --get submodule.dpdk.url # timeout=10
00:34:46.713 > git remote # timeout=10
00:34:46.716 > git config --get remote.origin.url # timeout=10
00:34:46.719 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10
00:34:46.723 > git config --get submodule.intel-ipsec-mb.url # timeout=10
00:34:46.726 > git remote # timeout=10
00:34:46.731 > git config --get remote.origin.url # timeout=10
00:34:46.735 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10
00:34:46.738 > git config --get submodule.isa-l.url # timeout=10
00:34:46.742 > git remote # timeout=10
00:34:46.746 > git config --get remote.origin.url # timeout=10
00:34:46.749 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10
00:34:46.753 > git config --get submodule.ocf.url # timeout=10
00:34:46.756 > git remote # timeout=10
00:34:46.760 > git config --get remote.origin.url # timeout=10
00:34:46.764 > git config -f .gitmodules --get submodule.ocf.path # timeout=10
00:34:46.767 > git config --get submodule.libvfio-user.url # timeout=10
00:34:46.770 > git remote # timeout=10
00:34:46.773 > git config --get remote.origin.url # timeout=10
00:34:46.776 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10
00:34:46.779 > git config --get submodule.xnvme.url # timeout=10
00:34:46.783 > git remote # timeout=10
00:34:46.787 > git config --get remote.origin.url # timeout=10
00:34:46.791 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10
00:34:46.795 > git config --get submodule.isa-l-crypto.url # timeout=10
00:34:46.799 > git remote # timeout=10
00:34:46.804 > git config --get remote.origin.url # timeout=10
00:34:46.808 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10
00:34:46.814 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:34:46.814 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:34:46.814 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:34:46.814 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:34:46.814 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:34:46.815 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:34:46.815 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:34:46.818 Setting http proxy: proxy-dmz.intel.com:911
00:34:46.818 Setting http proxy: proxy-dmz.intel.com:911
00:34:46.818 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10
00:34:46.818 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10
00:34:46.818 Setting http proxy: proxy-dmz.intel.com:911
00:34:46.818 Setting http proxy: proxy-dmz.intel.com:911
00:34:46.819 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10
00:34:46.819 Setting http proxy: proxy-dmz.intel.com:911
00:34:46.819 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10
00:34:46.819 Setting http proxy: proxy-dmz.intel.com:911
00:34:46.819 Setting http proxy: proxy-dmz.intel.com:911
00:34:46.819 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10
00:34:46.819 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10
00:34:46.819 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10
00:35:14.100 [Pipeline] dir
00:35:14.101 Running in /var/jenkins/workspace/nvme-vg-autotest/spdk
00:35:14.103 [Pipeline] {
00:35:14.120 [Pipeline] sh
00:35:14.411 ++ nproc
00:35:14.411 + threads=88
00:35:14.411 + git repack -a -d --threads=88
00:35:19.693 + git submodule foreach git repack -a -d --threads=88
00:35:19.693 Entering 'dpdk'
00:35:22.987 Entering 'intel-ipsec-mb'
00:35:23.624 Entering 'isa-l'
00:35:23.885 Entering 'isa-l-crypto'
00:35:23.885 Entering 'libvfio-user'
00:35:24.145 Entering 'ocf'
00:35:24.714 Entering 'xnvme'
00:35:25.283 + find .git -type f -name alternates -print -delete
00:35:25.283 .git/modules/intel-ipsec-mb/objects/info/alternates
00:35:25.283 .git/modules/xnvme/objects/info/alternates
00:35:25.283 .git/modules/isa-l-crypto/objects/info/alternates
00:35:25.283 .git/modules/libvfio-user/objects/info/alternates
00:35:25.283 .git/modules/ocf/objects/info/alternates
00:35:25.283 .git/modules/dpdk/objects/info/alternates
00:35:25.283 .git/modules/isa-l/objects/info/alternates
00:35:25.283 .git/objects/info/alternates
00:35:25.293 [Pipeline] }
00:35:25.312 [Pipeline] // dir
00:35:25.318 [Pipeline] }
00:35:25.337 [Pipeline] // retry
00:35:25.346 [Pipeline] sh
00:35:25.630 + hash pigz
00:35:25.630 + tar -czf spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz spdk
00:35:40.538 [Pipeline] retry
00:35:40.540 [Pipeline] {
00:35:40.555 [Pipeline] httpRequest
00:35:40.562 HttpMethod: PUT
00:35:40.563 URL: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz
00:35:40.563 Sending request to url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz
00:36:04.666 Response Code: HTTP/1.1 200 OK
00:36:04.678 Success: Status code 200 is in the accepted range: 200
00:36:04.681 [Pipeline] }
00:36:04.702 [Pipeline] // retry
00:36:04.711 [Pipeline] echo
00:36:04.714 
00:36:04.714 Locking
00:36:04.714 Waited 18s for lock
00:36:04.714 File already exists: /storage/packages/spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz
00:36:04.714 
00:36:04.719 [Pipeline] sh
00:36:05.005 + git -C spdk log --oneline -n5
00:36:05.005 b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode
00:36:05.005 496bfd677 env: match legacy mem mode config with DPDK
00:36:05.005 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:36:05.005 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:36:05.005 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:36:05.033 [Pipeline] writeFile
00:36:05.053 [Pipeline] sh
00:36:05.341 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:36:05.354 [Pipeline] sh
00:36:05.638 + cat autorun-spdk.conf
00:36:05.638 SPDK_RUN_FUNCTIONAL_TEST=1
00:36:05.638 SPDK_TEST_NVME=1
00:36:05.638 SPDK_TEST_FTL=1
00:36:05.638 SPDK_TEST_ISAL=1
00:36:05.638 SPDK_RUN_ASAN=1
00:36:05.638 SPDK_RUN_UBSAN=1
00:36:05.638 SPDK_TEST_XNVME=1
00:36:05.638 SPDK_TEST_NVME_FDP=1
00:36:05.638 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:36:05.646 RUN_NIGHTLY=0
00:36:05.648 [Pipeline] }
00:36:05.664 [Pipeline] // stage
00:36:05.684 [Pipeline] stage
00:36:05.687 [Pipeline] { (Run VM)
00:36:05.702 [Pipeline] sh
00:36:05.986 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:36:05.986 + echo 'Start stage prepare_nvme.sh'
00:36:05.986 Start stage prepare_nvme.sh
00:36:05.986 + [[ -n 7 ]]
00:36:05.986 + disk_prefix=ex7
00:36:05.986 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:36:05.986 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:36:05.986 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:36:05.986 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:36:05.986 ++ SPDK_TEST_NVME=1
00:36:05.986 ++ SPDK_TEST_FTL=1
00:36:05.986 ++ SPDK_TEST_ISAL=1
00:36:05.986 ++ SPDK_RUN_ASAN=1
00:36:05.986 ++ SPDK_RUN_UBSAN=1
00:36:05.986 ++ SPDK_TEST_XNVME=1
00:36:05.986 ++ SPDK_TEST_NVME_FDP=1
00:36:05.986 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:36:05.986 ++ RUN_NIGHTLY=0
00:36:05.986 + cd /var/jenkins/workspace/nvme-vg-autotest
00:36:05.986 + nvme_files=()
00:36:05.986 + declare -A nvme_files
00:36:05.986 + backend_dir=/var/lib/libvirt/images/backends
00:36:05.986 + nvme_files['nvme.img']=5G
00:36:05.986 + nvme_files['nvme-cmb.img']=5G
00:36:05.986 + nvme_files['nvme-multi0.img']=4G
00:36:05.986 + nvme_files['nvme-multi1.img']=4G
00:36:05.986 + nvme_files['nvme-multi2.img']=4G
00:36:05.986 + nvme_files['nvme-openstack.img']=8G
00:36:05.986 + nvme_files['nvme-zns.img']=5G
00:36:05.986 + (( SPDK_TEST_NVME_PMR == 1 ))
00:36:05.986 + (( SPDK_TEST_FTL == 1 ))
00:36:05.986 + nvme_files["nvme-ftl.img"]=6G
00:36:05.986 + (( SPDK_TEST_NVME_FDP == 1 ))
00:36:05.986 + nvme_files["nvme-fdp.img"]=1G
00:36:05.986 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:36:05.986 + for nvme in "${!nvme_files[@]}"
00:36:05.986 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:36:05.986 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:36:05.986 + for nvme in "${!nvme_files[@]}"
00:36:05.986 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:36:05.986 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:36:05.986 + for nvme in "${!nvme_files[@]}"
00:36:05.986 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:36:06.925 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:36:06.925 + for nvme in "${!nvme_files[@]}"
00:36:06.925 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:36:06.925 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:36:06.925 + for nvme in "${!nvme_files[@]}"
00:36:06.925 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:36:06.925 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:36:06.925 + for nvme in "${!nvme_files[@]}"
00:36:06.925 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:36:06.925 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:36:06.925 + for nvme in "${!nvme_files[@]}"
00:36:06.925 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:36:06.925 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:36:06.925 + for nvme in "${!nvme_files[@]}"
00:36:06.925 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:36:06.925 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:36:06.925 + for nvme in "${!nvme_files[@]}"
00:36:06.925 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:36:07.861 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:36:07.861 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:36:07.861 + echo 'End stage prepare_nvme.sh'
00:36:07.861 End stage prepare_nvme.sh
00:36:07.873 [Pipeline] sh
00:36:08.152 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:36:08.153 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:36:08.153 
00:36:08.153 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:36:08.153 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:36:08.153 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:36:08.153 HELP=0
00:36:08.153 DRY_RUN=0
00:36:08.153 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:36:08.153 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:36:08.153 NVME_AUTO_CREATE=0
00:36:08.153 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:36:08.153 NVME_CMB=,,,,
00:36:08.153 NVME_PMR=,,,,
00:36:08.153 NVME_ZNS=,,,,
00:36:08.153 NVME_MS=true,,,,
00:36:08.153 NVME_FDP=,,,on,
00:36:08.153 SPDK_VAGRANT_DISTRO=fedora39
00:36:08.153 SPDK_VAGRANT_VMCPU=10
00:36:08.153 SPDK_VAGRANT_VMRAM=12288
00:36:08.153 SPDK_VAGRANT_PROVIDER=libvirt
00:36:08.153 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:36:08.153 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:36:08.153 SPDK_OPENSTACK_NETWORK=0
00:36:08.153 VAGRANT_PACKAGE_BOX=0
00:36:08.153 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:36:08.153 FORCE_DISTRO=true
00:36:08.153 VAGRANT_BOX_VERSION=
00:36:08.153 EXTRA_VAGRANTFILES=
00:36:08.153 NIC_MODEL=e1000
00:36:08.153 
00:36:08.153 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:36:08.153 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:36:11.442 Bringing machine 'default' up with 'libvirt' provider...
00:36:12.377 ==> default: Creating image (snapshot of base box volume).
00:36:12.377 ==> default: Creating domain with the following settings...
00:36:12.378 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733737639_03fa90087127aed9652c
00:36:12.378 ==> default: -- Domain type: kvm
00:36:12.378 ==> default: -- Cpus: 10
00:36:12.378 ==> default: -- Feature: acpi
00:36:12.378 ==> default: -- Feature: apic
00:36:12.378 ==> default: -- Feature: pae
00:36:12.378 ==> default: -- Memory: 12288M
00:36:12.378 ==> default: -- Memory Backing: hugepages:
00:36:12.378 ==> default: -- Management MAC:
00:36:12.378 ==> default: -- Loader:
00:36:12.378 ==> default: -- Nvram:
00:36:12.378 ==> default: -- Base box: spdk/fedora39
00:36:12.378 ==> default: -- Storage pool: default
00:36:12.378 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733737639_03fa90087127aed9652c.img (20G)
00:36:12.378 ==> default: -- Volume Cache: default
00:36:12.378 ==> default: -- Kernel:
00:36:12.378 ==> default: -- Initrd:
00:36:12.378 ==> default: -- Graphics Type: vnc
00:36:12.378 ==> default: -- Graphics Port: -1
00:36:12.378 ==> default: -- Graphics IP: 127.0.0.1
00:36:12.378 ==> default: -- Graphics Password: Not defined
00:36:12.378 ==> default: -- Video Type: cirrus
00:36:12.378 ==> default: -- Video VRAM: 9216
00:36:12.378 ==> default: -- Sound Type:
00:36:12.378 ==> default: -- Keymap: en-us
00:36:12.378 ==> default: -- TPM Path:
00:36:12.378 ==> default: -- INPUT: type=mouse, bus=ps2
00:36:12.378 ==> default: -- Command line args:
00:36:12.378 ==> default: -> value=-device,
00:36:12.378 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:36:12.378 ==> default: -> value=-drive,
00:36:12.378 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:36:12.378 ==> default: -> value=-device,
00:36:12.378 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:36:12.378 ==> default: -> value=-device,
00:36:12.378 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:36:12.378 ==> default: -> value=-drive,
00:36:12.378 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:36:12.378 ==> default: -> value=-device,
00:36:12.378 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:36:12.378 ==> default: -> value=-device,
00:36:12.378 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:36:12.378 ==> default: -> value=-drive,
00:36:12.378 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:36:12.378 ==> default: -> value=-device,
00:36:12.378 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:36:12.378 ==> default: -> value=-drive,
00:36:12.378 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:36:12.378 ==> default: -> value=-device,
00:36:12.378 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:36:12.378 ==> default: -> value=-drive,
00:36:12.378 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:36:12.378 ==> default: -> value=-device,
00:36:12.378 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:36:12.378 ==> default: -> value=-device,
00:36:12.378 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:36:12.378 ==> default: -> value=-device,
00:36:12.378 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:36:12.378 ==> default: -> value=-drive,
00:36:12.378 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:36:12.378 ==> default: -> value=-device,
00:36:12.378 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:36:12.638 ==> default: Creating shared folders metadata...
00:36:12.638 ==> default: Starting domain.
00:36:14.544 ==> default: Waiting for domain to get an IP address...
00:36:32.635 ==> default: Waiting for SSH to become available...
00:36:32.635 ==> default: Configuring and enabling network interfaces...
00:36:36.829 default: SSH address: 192.168.121.115:22
00:36:36.829 default: SSH username: vagrant
00:36:36.829 default: SSH auth method: private key
00:36:38.735 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:36:46.864 ==> default: Mounting SSHFS shared folder...
00:36:48.324 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:36:48.324 ==> default: Checking Mount..
00:36:49.261 ==> default: Folder Successfully Mounted!
00:36:49.261 ==> default: Running provisioner: file...
00:36:50.202 default: ~/.gitconfig => .gitconfig
00:36:50.461 
00:36:50.461 SUCCESS!
00:36:50.461 
00:36:50.461 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:36:50.461 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:36:50.461 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:36:50.461 
00:36:50.472 [Pipeline] }
00:36:50.489 [Pipeline] // stage
00:36:50.500 [Pipeline] dir
00:36:50.501 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:36:50.503 [Pipeline] {
00:36:50.518 [Pipeline] catchError
00:36:50.520 [Pipeline] {
00:36:50.536 [Pipeline] sh
00:36:50.823 + vagrant ssh-config --host vagrant
00:36:50.823 + sed -ne /^Host/,$p
00:36:50.823 + tee ssh_conf
00:36:55.017 Host vagrant
00:36:55.017 HostName 192.168.121.115
00:36:55.017 User vagrant
00:36:55.017 Port 22
00:36:55.017 UserKnownHostsFile /dev/null
00:36:55.017 StrictHostKeyChecking no
00:36:55.017 PasswordAuthentication no
00:36:55.017 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:36:55.017 IdentitiesOnly yes
00:36:55.017 LogLevel FATAL
00:36:55.017 ForwardAgent yes
00:36:55.017 ForwardX11 yes
00:36:55.017 
00:36:55.032 [Pipeline] withEnv
00:36:55.035 [Pipeline] {
00:36:55.050 [Pipeline] sh
00:36:55.333 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:36:55.334 source /etc/os-release
00:36:55.334 [[ -e /image.version ]] && img=$(< /image.version)
00:36:55.334 # Minimal, systemd-like check.
00:36:55.334 if [[ -e /.dockerenv ]]; then
00:36:55.334 # Clear garbage from the node's name:
00:36:55.334 # agt-er_autotest_547-896 -> autotest_547-896
00:36:55.334 # $HOSTNAME is the actual container id
00:36:55.334 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:36:55.334 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:36:55.334 # We can assume this is a mount from a host where container is running,
00:36:55.334 # so fetch its hostname to easily identify the target swarm worker.
00:36:55.334 container="$(< /etc/hostname) ($agent)"
00:36:55.334 else
00:36:55.334 # Fallback
00:36:55.334 container=$agent
00:36:55.334 fi
00:36:55.334 fi
00:36:55.334 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:36:55.334 
00:36:55.607 [Pipeline] }
00:36:55.624 [Pipeline] // withEnv
00:36:55.634 [Pipeline] setCustomBuildProperty
00:36:55.652 [Pipeline] stage
00:36:55.654 [Pipeline] { (Tests)
00:36:55.674 [Pipeline] sh
00:36:55.956 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:36:56.226 [Pipeline] sh
00:36:56.507 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:36:56.781 [Pipeline] timeout
00:36:56.781 Timeout set to expire in 50 min
00:36:56.783 [Pipeline] {
00:36:56.798 [Pipeline] sh
00:36:57.078 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:36:57.645 HEAD is now at b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode
00:36:57.657 [Pipeline] sh
00:36:57.934 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:36:58.220 [Pipeline] sh
00:36:58.494 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:36:58.765 [Pipeline] sh
00:36:59.043 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:36:59.301 ++ readlink -f spdk_repo
00:36:59.301 + DIR_ROOT=/home/vagrant/spdk_repo
00:36:59.301 + [[ -n /home/vagrant/spdk_repo ]]
00:36:59.301 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:36:59.301 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:36:59.301 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:36:59.301 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:36:59.301 + [[ -d /home/vagrant/spdk_repo/output ]]
00:36:59.301 + [[ nvme-vg-autotest == pkgdep-* ]]
00:36:59.301 + cd /home/vagrant/spdk_repo
00:36:59.301 + source /etc/os-release
00:36:59.301 ++ NAME='Fedora Linux'
00:36:59.301 ++ VERSION='39 (Cloud Edition)'
00:36:59.301 ++ ID=fedora
00:36:59.301 ++ VERSION_ID=39
00:36:59.301 ++ VERSION_CODENAME=
00:36:59.301 ++ PLATFORM_ID=platform:f39
00:36:59.301 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:36:59.301 ++ ANSI_COLOR='0;38;2;60;110;180'
00:36:59.301 ++ LOGO=fedora-logo-icon
00:36:59.301 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:36:59.301 ++ HOME_URL=https://fedoraproject.org/
00:36:59.301 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:36:59.301 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:36:59.301 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:36:59.301 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:36:59.301 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:36:59.301 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:36:59.301 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:36:59.301 ++ SUPPORT_END=2024-11-12
00:36:59.301 ++ VARIANT='Cloud Edition'
00:36:59.301 ++ VARIANT_ID=cloud
00:36:59.301 + uname -a
00:36:59.301 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:36:59.301 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:36:59.559 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:59.817 Hugepages
00:36:59.817 node hugesize free / total
00:36:59.817 node0 1048576kB 0 / 0
00:36:59.817 node0 2048kB 0 / 0
00:36:59.817 
00:36:59.817 Type BDF Vendor Device NUMA Driver Device Block devices
00:36:59.817 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:36:59.817 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:36:59.817 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:36:59.817 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:37:00.075 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:37:00.075 + rm -f /tmp/spdk-ld-path
00:37:00.075 + source autorun-spdk.conf
00:37:00.075 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:37:00.075 ++ SPDK_TEST_NVME=1
00:37:00.075 ++ SPDK_TEST_FTL=1
00:37:00.075 ++ SPDK_TEST_ISAL=1
00:37:00.075 ++ SPDK_RUN_ASAN=1
00:37:00.075 ++ SPDK_RUN_UBSAN=1
00:37:00.075 ++ SPDK_TEST_XNVME=1
00:37:00.075 ++ SPDK_TEST_NVME_FDP=1
00:37:00.075 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:37:00.075 ++ RUN_NIGHTLY=0
00:37:00.075 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:37:00.075 + [[ -n '' ]]
00:37:00.075 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:37:00.075 + for M in /var/spdk/build-*-manifest.txt
00:37:00.075 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:37:00.075 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:37:00.075 + for M in /var/spdk/build-*-manifest.txt
00:37:00.075 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:37:00.075 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:37:00.075 + for M in /var/spdk/build-*-manifest.txt
00:37:00.075 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:37:00.075 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:37:00.075 ++ uname
00:37:00.075 + [[ Linux == \L\i\n\u\x ]]
00:37:00.075 + sudo dmesg -T
00:37:00.075 + sudo dmesg --clear
00:37:00.075 + dmesg_pid=5408
+ [[ Fedora Linux == FreeBSD ]]
00:37:00.075 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:37:00.075 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:37:00.075 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:37:00.075 + sudo dmesg -Tw
00:37:00.075 + [[ -x /usr/src/fio-static/fio ]]
00:37:00.075 + export FIO_BIN=/usr/src/fio-static/fio
00:37:00.075 + FIO_BIN=/usr/src/fio-static/fio
00:37:00.075 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:37:00.075 + [[ ! -v VFIO_QEMU_BIN ]]
00:37:00.075 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:37:00.075 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:37:00.075 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:37:00.075 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:37:00.075 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:37:00.075 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:37:00.075 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:37:00.075 09:48:07 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:37:00.075 09:48:07 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:37:00.076 09:48:07 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:37:00.076 09:48:07 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:37:00.076 09:48:07 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:37:00.076 09:48:07 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:37:00.076 09:48:07 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:37:00.076 09:48:07 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:37:00.076 09:48:07 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:37:00.076 09:48:07 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:37:00.076 09:48:07 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:37:00.076 09:48:07 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:37:00.076 09:48:07 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:37:00.076 09:48:07 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:37:00.335 09:48:07 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:37:00.335 09:48:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:37:00.335 09:48:07 -- scripts/common.sh@15 -- $ shopt -s extglob
00:37:00.335 09:48:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:37:00.335 09:48:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:00.335 09:48:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:00.335 09:48:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:00.335 09:48:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:00.335 09:48:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:00.335 09:48:07 -- paths/export.sh@5 -- $ export PATH
00:37:00.335 09:48:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:00.335 09:48:07 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:37:00.335 09:48:07 -- common/autobuild_common.sh@493 -- $ date +%s
00:37:00.335 09:48:07 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733737687.XXXXXX
00:37:00.335 09:48:07 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733737687.uK1Yi2
00:37:00.335 09:48:07 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:37:00.335 09:48:07 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:37:00.335 09:48:07 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:37:00.335 09:48:07 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:37:00.335 09:48:07 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:37:00.335 09:48:07 -- common/autobuild_common.sh@509 -- $ get_config_params
00:37:00.335 09:48:07 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:37:00.335 09:48:07 -- common/autotest_common.sh@10 -- $ set +x
00:37:00.335 09:48:07 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:37:00.335 09:48:07 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:37:00.335 09:48:07 -- pm/common@17 -- $ local monitor
00:37:00.335 09:48:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:00.335 09:48:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:00.335 09:48:07 -- pm/common@21 -- $ date +%s
00:37:00.335 09:48:07 -- pm/common@25 -- $ sleep 1
00:37:00.335 09:48:07 -- pm/common@21 -- $ date +%s
00:37:00.335 09:48:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733737687
00:37:00.335 09:48:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733737687
00:37:00.335 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733737687_collect-cpu-load.pm.log
00:37:00.335 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733737687_collect-vmstat.pm.log
00:37:01.271 09:48:08 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:37:01.271 09:48:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:37:01.271 09:48:08 -- spdk/autobuild.sh@12 -- $ umask 022
00:37:01.271 09:48:08 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:37:01.271 09:48:08 -- spdk/autobuild.sh@16 -- $ date -u
00:37:01.271 Mon Dec 9 09:48:08 AM UTC 2024
00:37:01.271 09:48:08 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:37:01.271 v25.01-pre-313-gb71c8b8dd
00:37:01.271 09:48:08 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:37:01.271 09:48:08 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:37:01.271 09:48:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:37:01.271 09:48:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:37:01.271 09:48:08 -- common/autotest_common.sh@10 -- $ set +x
00:37:01.271 ************************************
00:37:01.271 START TEST asan
00:37:01.271 ************************************
00:37:01.271 using asan
00:37:01.271 09:48:08 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:37:01.271 
00:37:01.271 real 0m0.000s
00:37:01.271 user 0m0.000s
00:37:01.271 sys 0m0.000s
00:37:01.271 09:48:08 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:37:01.271 ************************************
00:37:01.271 END TEST asan
00:37:01.271 09:48:08 asan -- common/autotest_common.sh@10 -- $ set +x
00:37:01.271 ************************************
00:37:01.271 09:48:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:37:01.271 09:48:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:37:01.271 09:48:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:37:01.271 09:48:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:37:01.271 09:48:08 -- common/autotest_common.sh@10 -- $ set +x
00:37:01.271 ************************************
00:37:01.271 START TEST ubsan
00:37:01.271 ************************************
00:37:01.271 using ubsan
00:37:01.271 09:48:08 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:37:01.271 
00:37:01.271 real 0m0.000s
00:37:01.271 user 0m0.000s
00:37:01.271 sys 0m0.000s
00:37:01.271 09:48:08 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:37:01.271 ************************************
00:37:01.271 END TEST ubsan
00:37:01.271 ************************************
00:37:01.271 09:48:08 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:37:01.271 09:48:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:37:01.271 09:48:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:37:01.271 09:48:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:37:01.271 09:48:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:37:01.271 09:48:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:37:01.271 09:48:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:37:01.271 09:48:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:37:01.271 09:48:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:37:01.271 09:48:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:37:01.530 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:37:01.530 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:37:01.788 Using 'verbs' RDMA provider
00:37:15.363 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:37:27.561 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:37:27.561 Creating mk/config.mk...done.
00:37:27.561 Creating mk/cc.flags.mk...done.
00:37:27.561 Type 'make' to build.
00:37:27.561 09:48:33 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:37:27.561 09:48:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:37:27.561 09:48:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:37:27.561 09:48:33 -- common/autotest_common.sh@10 -- $ set +x
00:37:27.561 ************************************
00:37:27.561 START TEST make
00:37:27.561 ************************************
00:37:27.561 09:48:33 make -- common/autotest_common.sh@1129 -- $ make -j10
00:37:27.561 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:37:27.561 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:37:27.561 meson setup builddir \
00:37:27.561 -Dwith-libaio=enabled \
00:37:27.561 -Dwith-liburing=enabled \
00:37:27.561 -Dwith-libvfn=disabled \
00:37:27.561 -Dwith-spdk=disabled \
00:37:27.561 -Dexamples=false \
00:37:27.561 -Dtests=false \
00:37:27.561 -Dtools=false && \
00:37:27.561 meson compile -C builddir && \
00:37:27.561 cd -)
00:37:27.561 make[1]: Nothing to be done for 'all'.
00:37:30.866 The Meson build system
00:37:30.866 Version: 1.5.0
00:37:30.866 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:37:30.866 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:37:30.866 Build type: native build
00:37:30.866 Project name: xnvme
00:37:30.866 Project version: 0.7.5
00:37:30.866 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:37:30.866 C linker for the host machine: cc ld.bfd 2.40-14
00:37:30.866 Host machine cpu family: x86_64
00:37:30.866 Host machine cpu: x86_64
00:37:30.866 Message: host_machine.system: linux
00:37:30.866 Compiler for C supports arguments -Wno-missing-braces: YES
00:37:30.866 Compiler for C supports arguments -Wno-cast-function-type: YES
00:37:30.866 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:37:30.866 Run-time dependency threads found: YES
00:37:30.866 Has header "setupapi.h" : NO
00:37:30.866 Has header "linux/blkzoned.h" : YES
00:37:30.866 Has header "linux/blkzoned.h" : YES (cached)
00:37:30.866 Has header "libaio.h" : YES
00:37:30.866 Library aio found: YES
00:37:30.866 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:37:30.866 Run-time dependency liburing found: YES 2.2
00:37:30.866 Dependency libvfn skipped: feature with-libvfn disabled
00:37:30.866 Found CMake: /usr/bin/cmake (3.27.7)
00:37:30.866 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:37:30.866 Subproject spdk : skipped: feature with-spdk disabled
00:37:30.866 Run-time dependency appleframeworks found: NO (tried framework)
00:37:30.866 Run-time dependency appleframeworks found: NO (tried framework)
00:37:30.866 Library rt found: YES
00:37:30.866 Checking for function "clock_gettime" with dependency -lrt: YES
00:37:30.866 Configuring xnvme_config.h using configuration
00:37:30.866 Configuring xnvme.spec using configuration
00:37:30.866 Run-time dependency bash-completion found: YES 2.11
00:37:30.866 Message: Bash-completions: /usr/share/bash-completion/completions
00:37:30.866 Program cp found: YES (/usr/bin/cp)
00:37:30.866 Build targets in project: 3
00:37:30.866 
00:37:30.866 xnvme 0.7.5
00:37:30.866 
00:37:30.866 Subprojects
00:37:30.866 spdk : NO Feature 'with-spdk' disabled
00:37:30.866 
00:37:30.866 User defined options
00:37:30.866 examples : false
00:37:30.866 tests : false
00:37:30.866 tools : false
00:37:30.866 with-libaio : enabled
00:37:30.866 with-liburing: enabled
00:37:30.866 with-libvfn : disabled
00:37:30.866 with-spdk : disabled
00:37:30.866 
00:37:30.866 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:37:31.801 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:37:31.801 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:37:31.801 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:37:31.801 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:37:31.801 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:37:31.801 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:37:31.801 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:37:31.801 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:37:31.801 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:37:32.059 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:37:32.059 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:37:32.059 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:37:32.059 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:37:32.059 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:37:32.059 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:37:32.059 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:37:32.059 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:37:32.059 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:37:32.318 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:37:32.318 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:37:32.318 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:37:32.318 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:37:32.318 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:37:32.318 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:37:32.318 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:37:32.318 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:37:32.318 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:37:32.318 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:37:32.318 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:37:32.318 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:37:32.318 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:37:32.318 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:37:32.318 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:37:32.318 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:37:32.318 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:37:32.318 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:37:32.318 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:37:32.576 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:37:32.576 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:37:32.576 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:37:32.576 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:37:32.576 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:37:32.576 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:37:32.576 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:37:32.576 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:37:32.576 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:37:32.576 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:37:32.576 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:37:32.576 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:37:32.576 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:37:32.576 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:37:32.576 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:37:32.576 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:37:32.576 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:37:32.835 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:37:32.835 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:37:32.835 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:37:32.835 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:37:32.835 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:37:32.835 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:37:32.835 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:37:32.835 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:37:33.093 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:37:33.093 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:37:33.093 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:37:33.093 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:37:33.093 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:37:33.093 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:37:33.093 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:37:33.093 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:37:33.352 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:37:33.352 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:37:33.352 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:37:33.352 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:37:33.610 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:37:33.610 [75/76] Linking static target lib/libxnvme.a
00:37:33.610 [76/76] Linking target lib/libxnvme.so.0.7.5
00:37:33.610 INFO: autodetecting backend as ninja
00:37:33.610 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:37:33.610 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:37:48.887 The Meson build system
00:37:48.887 Version: 1.5.0
00:37:48.887 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:37:48.887 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:37:48.887 Build type: native build
00:37:48.887 Program cat found: YES (/usr/bin/cat)
00:37:48.887 Project name: DPDK
00:37:48.887 Project version: 24.03.0
00:37:48.888 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:37:48.888 C linker for the host machine: cc ld.bfd 2.40-14
00:37:48.888 Host machine cpu family: x86_64
00:37:48.888 Host machine cpu: x86_64
00:37:48.888 Message: ## Building in Developer Mode ##
00:37:48.888 Program pkg-config found: YES (/usr/bin/pkg-config)
00:37:48.888 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:37:48.888 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:37:48.888 Program python3 found: YES (/usr/bin/python3)
00:37:48.888 Program cat found: YES (/usr/bin/cat)
00:37:48.888 Compiler for C supports arguments -march=native: YES
00:37:48.888 Checking for size of "void *" : 8
00:37:48.888 Checking for size of "void *" : 8 (cached)
00:37:48.888 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:37:48.888 Library m found: YES
00:37:48.888 Library numa found: YES
00:37:48.888 Has header "numaif.h" : YES
00:37:48.888 Library fdt found: NO
00:37:48.888 Library execinfo found: NO
00:37:48.888 Has header "execinfo.h" : YES
00:37:48.888 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:37:48.888 Run-time dependency libarchive found: NO (tried pkgconfig)
00:37:48.888 Run-time dependency libbsd found: NO (tried pkgconfig)
00:37:48.888 Run-time dependency jansson found: NO (tried pkgconfig)
00:37:48.888 Run-time dependency openssl found: YES 3.1.1
00:37:48.888 Run-time dependency libpcap found: YES 1.10.4
00:37:48.888 Has header "pcap.h" with dependency libpcap: YES
00:37:48.888 Compiler for C supports arguments -Wcast-qual: YES
00:37:48.888 Compiler for C supports arguments -Wdeprecated: YES
00:37:48.888 Compiler for C supports arguments -Wformat: YES
00:37:48.888 Compiler for C supports arguments -Wformat-nonliteral: NO
00:37:48.888 Compiler for C supports arguments -Wformat-security: NO
00:37:48.888 Compiler for C supports arguments -Wmissing-declarations: YES
00:37:48.888 Compiler for C supports arguments -Wmissing-prototypes: YES
00:37:48.888 Compiler for C supports arguments -Wnested-externs: YES
00:37:48.888 Compiler for C supports arguments -Wold-style-definition: YES
00:37:48.888 Compiler for C supports arguments -Wpointer-arith: YES
00:37:48.888 Compiler for C supports arguments -Wsign-compare: YES
00:37:48.888 Compiler for C supports arguments -Wstrict-prototypes: YES
00:37:48.888 Compiler for C supports arguments -Wundef: YES
00:37:48.888 Compiler for C supports arguments -Wwrite-strings: YES
00:37:48.888 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:37:48.888 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:37:48.888 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:37:48.888 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:37:48.888 Program objdump found: YES (/usr/bin/objdump)
00:37:48.888 Compiler for C supports arguments -mavx512f: YES
00:37:48.888 Checking if "AVX512 checking" compiles: YES
00:37:48.888 Fetching value of define "__SSE4_2__" : 1
00:37:48.888 Fetching value of define "__AES__" : 1
00:37:48.888 Fetching value of define "__AVX__" : 1
00:37:48.888 Fetching value of define "__AVX2__" : 1
00:37:48.888 Fetching value of define "__AVX512BW__" : (undefined)
00:37:48.888 Fetching value of define "__AVX512CD__" : (undefined)
00:37:48.888 Fetching value of define "__AVX512DQ__" : (undefined)
00:37:48.888 Fetching value of define "__AVX512F__" : (undefined)
00:37:48.888 Fetching value of define "__AVX512VL__" : (undefined)
00:37:48.888 Fetching value of define "__PCLMUL__" : 1
00:37:48.888 Fetching value of define "__RDRND__" : 1
00:37:48.888 Fetching value of define "__RDSEED__" : 1
00:37:48.888 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:37:48.888 Fetching value of define "__znver1__" : (undefined)
00:37:48.888 Fetching value of define "__znver2__" : (undefined)
00:37:48.888 Fetching value of define "__znver3__" : (undefined)
00:37:48.888 Fetching value of define "__znver4__" : (undefined)
00:37:48.888 Library asan found: YES
00:37:48.888 Compiler for C supports arguments -Wno-format-truncation: YES
00:37:48.888 Message: lib/log: Defining dependency "log"
00:37:48.888 Message: lib/kvargs: Defining dependency "kvargs"
00:37:48.888 Message: lib/telemetry: Defining dependency "telemetry"
00:37:48.888 Library rt found: YES
00:37:48.888 Checking for function "getentropy" : NO 00:37:48.888 Message: lib/eal: Defining dependency "eal" 00:37:48.888 Message: lib/ring: Defining dependency "ring" 00:37:48.888 Message: lib/rcu: Defining dependency "rcu" 00:37:48.888 Message: lib/mempool: Defining dependency "mempool" 00:37:48.888 Message: lib/mbuf: Defining dependency "mbuf" 00:37:48.888 Fetching value of define "__PCLMUL__" : 1 (cached) 00:37:48.888 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:37:48.888 Compiler for C supports arguments -mpclmul: YES 00:37:48.888 Compiler for C supports arguments -maes: YES 00:37:48.888 Compiler for C supports arguments -mavx512f: YES (cached) 00:37:48.888 Compiler for C supports arguments -mavx512bw: YES 00:37:48.888 Compiler for C supports arguments -mavx512dq: YES 00:37:48.888 Compiler for C supports arguments -mavx512vl: YES 00:37:48.888 Compiler for C supports arguments -mvpclmulqdq: YES 00:37:48.888 Compiler for C supports arguments -mavx2: YES 00:37:48.888 Compiler for C supports arguments -mavx: YES 00:37:48.888 Message: lib/net: Defining dependency "net" 00:37:48.888 Message: lib/meter: Defining dependency "meter" 00:37:48.888 Message: lib/ethdev: Defining dependency "ethdev" 00:37:48.888 Message: lib/pci: Defining dependency "pci" 00:37:48.888 Message: lib/cmdline: Defining dependency "cmdline" 00:37:48.888 Message: lib/hash: Defining dependency "hash" 00:37:48.888 Message: lib/timer: Defining dependency "timer" 00:37:48.888 Message: lib/compressdev: Defining dependency "compressdev" 00:37:48.888 Message: lib/cryptodev: Defining dependency "cryptodev" 00:37:48.888 Message: lib/dmadev: Defining dependency "dmadev" 00:37:48.888 Compiler for C supports arguments -Wno-cast-qual: YES 00:37:48.888 Message: lib/power: Defining dependency "power" 00:37:48.888 Message: lib/reorder: Defining dependency "reorder" 00:37:48.888 Message: lib/security: Defining dependency "security" 00:37:48.888 Has header "linux/userfaultfd.h" : YES 00:37:48.888 Has header "linux/vduse.h" : YES 00:37:48.888 Message: lib/vhost: Defining dependency "vhost" 00:37:48.888 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:37:48.888 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:37:48.888 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:37:48.888 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:37:48.888 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:37:48.888 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:37:48.888 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:37:48.888 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:37:48.888 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:37:48.888 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:37:48.888 Program doxygen found: YES (/usr/local/bin/doxygen) 00:37:48.888 Configuring doxy-api-html.conf using configuration 00:37:48.888 Configuring doxy-api-man.conf using configuration 00:37:48.888 Program mandb found: YES (/usr/bin/mandb) 00:37:48.888 Program sphinx-build found: NO 00:37:48.888 Configuring rte_build_config.h using configuration 00:37:48.888 Message: 00:37:48.888 ================= 00:37:48.888 Applications Enabled 00:37:48.888 ================= 00:37:48.888 00:37:48.888 apps: 00:37:48.888 00:37:48.888 00:37:48.888 Message: 00:37:48.888 ================= 00:37:48.888 Libraries Enabled 00:37:48.888 
================= 00:37:48.888 00:37:48.888 libs: 00:37:48.888 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:37:48.888 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:37:48.888 cryptodev, dmadev, power, reorder, security, vhost, 00:37:48.888 00:37:48.888 Message: 00:37:48.888 =============== 00:37:48.888 Drivers Enabled 00:37:48.888 =============== 00:37:48.888 00:37:48.888 common: 00:37:48.888 00:37:48.888 bus: 00:37:48.888 pci, vdev, 00:37:48.888 mempool: 00:37:48.888 ring, 00:37:48.888 dma: 00:37:48.888 00:37:48.888 net: 00:37:48.888 00:37:48.888 crypto: 00:37:48.888 00:37:48.888 compress: 00:37:48.888 00:37:48.888 vdpa: 00:37:48.888 00:37:48.888 00:37:48.888 Message: 00:37:48.888 ================= 00:37:48.889 Content Skipped 00:37:48.889 ================= 00:37:48.889 00:37:48.889 apps: 00:37:48.889 dumpcap: explicitly disabled via build config 00:37:48.889 graph: explicitly disabled via build config 00:37:48.889 pdump: explicitly disabled via build config 00:37:48.889 proc-info: explicitly disabled via build config 00:37:48.889 test-acl: explicitly disabled via build config 00:37:48.889 test-bbdev: explicitly disabled via build config 00:37:48.889 test-cmdline: explicitly disabled via build config 00:37:48.889 test-compress-perf: explicitly disabled via build config 00:37:48.889 test-crypto-perf: explicitly disabled via build config 00:37:48.889 test-dma-perf: explicitly disabled via build config 00:37:48.889 test-eventdev: explicitly disabled via build config 00:37:48.889 test-fib: explicitly disabled via build config 00:37:48.889 test-flow-perf: explicitly disabled via build config 00:37:48.889 test-gpudev: explicitly disabled via build config 00:37:48.889 test-mldev: explicitly disabled via build config 00:37:48.889 test-pipeline: explicitly disabled via build config 00:37:48.889 test-pmd: explicitly disabled via build config 00:37:48.889 test-regex: explicitly disabled via build config 00:37:48.889 test-sad: explicitly disabled via build config 00:37:48.889 test-security-perf: explicitly disabled via build config 00:37:48.889 00:37:48.889 libs: 00:37:48.889 argparse: explicitly disabled via build config 00:37:48.889 metrics: explicitly disabled via build config 00:37:48.889 acl: explicitly disabled via build config 00:37:48.889 bbdev: explicitly disabled via build config 00:37:48.889 bitratestats: explicitly disabled via build config 00:37:48.889 bpf: explicitly disabled via build config 00:37:48.889 cfgfile: explicitly disabled via build config 00:37:48.889 distributor: explicitly disabled via build config 00:37:48.889 efd: explicitly disabled via build config 00:37:48.889 eventdev: explicitly disabled via build config 00:37:48.889 dispatcher: explicitly disabled via build config 00:37:48.889 gpudev: explicitly disabled via build config 00:37:48.889 gro: explicitly disabled via build config 00:37:48.889 gso: explicitly disabled via build config 00:37:48.889 ip_frag: explicitly disabled via build config 00:37:48.889 jobstats: explicitly disabled via build config 00:37:48.889 latencystats: explicitly disabled via build config 00:37:48.889 lpm: explicitly disabled via build config 00:37:48.889 member: explicitly disabled via build config 00:37:48.889 pcapng: explicitly disabled via build config 00:37:48.889 rawdev: explicitly disabled via build config 00:37:48.889 regexdev: explicitly disabled via build config 00:37:48.889 mldev: explicitly disabled via build config 00:37:48.889 rib: explicitly disabled via build config 00:37:48.889 sched: explicitly 
disabled via build config 00:37:48.889 stack: explicitly disabled via build config 00:37:48.889 ipsec: explicitly disabled via build config 00:37:48.889 pdcp: explicitly disabled via build config 00:37:48.889 fib: explicitly disabled via build config 00:37:48.889 port: explicitly disabled via build config 00:37:48.889 pdump: explicitly disabled via build config 00:37:48.889 table: explicitly disabled via build config 00:37:48.889 pipeline: explicitly disabled via build config 00:37:48.889 graph: explicitly disabled via build config 00:37:48.889 node: explicitly disabled via build config 00:37:48.889 00:37:48.889 drivers: 00:37:48.889 common/cpt: not in enabled drivers build config 00:37:48.889 common/dpaax: not in enabled drivers build config 00:37:48.889 common/iavf: not in enabled drivers build config 00:37:48.889 common/idpf: not in enabled drivers build config 00:37:48.889 common/ionic: not in enabled drivers build config 00:37:48.889 common/mvep: not in enabled drivers build config 00:37:48.889 common/octeontx: not in enabled drivers build config 00:37:48.889 bus/auxiliary: not in enabled drivers build config 00:37:48.889 bus/cdx: not in enabled drivers build config 00:37:48.889 bus/dpaa: not in enabled drivers build config 00:37:48.889 bus/fslmc: not in enabled drivers build config 00:37:48.889 bus/ifpga: not in enabled drivers build config 00:37:48.889 bus/platform: not in enabled drivers build config 00:37:48.889 bus/uacce: not in enabled drivers build config 00:37:48.889 bus/vmbus: not in enabled drivers build config 00:37:48.889 common/cnxk: not in enabled drivers build config 00:37:48.889 common/mlx5: not in enabled drivers build config 00:37:48.889 common/nfp: not in enabled drivers build config 00:37:48.889 common/nitrox: not in enabled drivers build config 00:37:48.889 common/qat: not in enabled drivers build config 00:37:48.889 common/sfc_efx: not in enabled drivers build config 00:37:48.889 mempool/bucket: not in enabled drivers build config 00:37:48.889 mempool/cnxk: not in enabled drivers build config 00:37:48.889 mempool/dpaa: not in enabled drivers build config 00:37:48.889 mempool/dpaa2: not in enabled drivers build config 00:37:48.889 mempool/octeontx: not in enabled drivers build config 00:37:48.889 mempool/stack: not in enabled drivers build config 00:37:48.889 dma/cnxk: not in enabled drivers build config 00:37:48.889 dma/dpaa: not in enabled drivers build config 00:37:48.889 dma/dpaa2: not in enabled drivers build config 00:37:48.889 dma/hisilicon: not in enabled drivers build config 00:37:48.889 dma/idxd: not in enabled drivers build config 00:37:48.889 dma/ioat: not in enabled drivers build config 00:37:48.889 dma/skeleton: not in enabled drivers build config 00:37:48.889 net/af_packet: not in enabled drivers build config 00:37:48.889 net/af_xdp: not in enabled drivers build config 00:37:48.889 net/ark: not in enabled drivers build config 00:37:48.889 net/atlantic: not in enabled drivers build config 00:37:48.889 net/avp: not in enabled drivers build config 00:37:48.889 net/axgbe: not in enabled drivers build config 00:37:48.889 net/bnx2x: not in enabled drivers build config 00:37:48.889 net/bnxt: not in enabled drivers build config 00:37:48.889 net/bonding: not in enabled drivers build config 00:37:48.889 net/cnxk: not in enabled drivers build config 00:37:48.889 net/cpfl: not in enabled drivers build config 00:37:48.889 net/cxgbe: not in enabled drivers build config 00:37:48.889 net/dpaa: not in enabled drivers build config 00:37:48.889 net/dpaa2: not in 
enabled drivers build config 00:37:48.889 net/e1000: not in enabled drivers build config 00:37:48.889 net/ena: not in enabled drivers build config 00:37:48.889 net/enetc: not in enabled drivers build config 00:37:48.889 net/enetfec: not in enabled drivers build config 00:37:48.889 net/enic: not in enabled drivers build config 00:37:48.889 net/failsafe: not in enabled drivers build config 00:37:48.889 net/fm10k: not in enabled drivers build config 00:37:48.889 net/gve: not in enabled drivers build config 00:37:48.889 net/hinic: not in enabled drivers build config 00:37:48.889 net/hns3: not in enabled drivers build config 00:37:48.889 net/i40e: not in enabled drivers build config 00:37:48.889 net/iavf: not in enabled drivers build config 00:37:48.889 net/ice: not in enabled drivers build config 00:37:48.889 net/idpf: not in enabled drivers build config 00:37:48.889 net/igc: not in enabled drivers build config 00:37:48.889 net/ionic: not in enabled drivers build config 00:37:48.889 net/ipn3ke: not in enabled drivers build config 00:37:48.889 net/ixgbe: not in enabled drivers build config 00:37:48.889 net/mana: not in enabled drivers build config 00:37:48.889 net/memif: not in enabled drivers build config 00:37:48.889 net/mlx4: not in enabled drivers build config 00:37:48.889 net/mlx5: not in enabled drivers build config 00:37:48.889 net/mvneta: not in enabled drivers build config 00:37:48.889 net/mvpp2: not in enabled drivers build config 00:37:48.889 net/netvsc: not in enabled drivers build config 00:37:48.889 net/nfb: not in enabled drivers build config 00:37:48.889 net/nfp: not in enabled drivers build config 00:37:48.889 net/ngbe: not in enabled drivers build config 00:37:48.889 net/null: not in enabled drivers build config 00:37:48.889 net/octeontx: not in enabled drivers build config 00:37:48.889 net/octeon_ep: not in enabled drivers build config 00:37:48.889 net/pcap: not in enabled drivers build config 00:37:48.889 net/pfe: not in enabled drivers build config 00:37:48.889 net/qede: not in enabled drivers build config 00:37:48.889 net/ring: not in enabled drivers build config 00:37:48.889 net/sfc: not in enabled drivers build config 00:37:48.889 net/softnic: not in enabled drivers build config 00:37:48.889 net/tap: not in enabled drivers build config 00:37:48.889 net/thunderx: not in enabled drivers build config 00:37:48.889 net/txgbe: not in enabled drivers build config 00:37:48.889 net/vdev_netvsc: not in enabled drivers build config 00:37:48.889 net/vhost: not in enabled drivers build config 00:37:48.889 net/virtio: not in enabled drivers build config 00:37:48.889 net/vmxnet3: not in enabled drivers build config 00:37:48.889 raw/*: missing internal dependency, "rawdev" 00:37:48.890 crypto/armv8: not in enabled drivers build config 00:37:48.890 crypto/bcmfs: not in enabled drivers build config 00:37:48.890 crypto/caam_jr: not in enabled drivers build config 00:37:48.890 crypto/ccp: not in enabled drivers build config 00:37:48.890 crypto/cnxk: not in enabled drivers build config 00:37:48.890 crypto/dpaa_sec: not in enabled drivers build config 00:37:48.890 crypto/dpaa2_sec: not in enabled drivers build config 00:37:48.890 crypto/ipsec_mb: not in enabled drivers build config 00:37:48.890 crypto/mlx5: not in enabled drivers build config 00:37:48.890 crypto/mvsam: not in enabled drivers build config 00:37:48.890 crypto/nitrox: not in enabled drivers build config 00:37:48.890 crypto/null: not in enabled drivers build config 00:37:48.890 crypto/octeontx: not in enabled drivers build config 
00:37:48.890 crypto/openssl: not in enabled drivers build config 00:37:48.890 crypto/scheduler: not in enabled drivers build config 00:37:48.890 crypto/uadk: not in enabled drivers build config 00:37:48.890 crypto/virtio: not in enabled drivers build config 00:37:48.890 compress/isal: not in enabled drivers build config 00:37:48.890 compress/mlx5: not in enabled drivers build config 00:37:48.890 compress/nitrox: not in enabled drivers build config 00:37:48.890 compress/octeontx: not in enabled drivers build config 00:37:48.890 compress/zlib: not in enabled drivers build config 00:37:48.890 regex/*: missing internal dependency, "regexdev" 00:37:48.890 ml/*: missing internal dependency, "mldev" 00:37:48.890 vdpa/ifc: not in enabled drivers build config 00:37:48.890 vdpa/mlx5: not in enabled drivers build config 00:37:48.890 vdpa/nfp: not in enabled drivers build config 00:37:48.890 vdpa/sfc: not in enabled drivers build config 00:37:48.890 event/*: missing internal dependency, "eventdev" 00:37:48.890 baseband/*: missing internal dependency, "bbdev" 00:37:48.890 gpu/*: missing internal dependency, "gpudev" 00:37:48.890 00:37:48.890 00:37:48.890 Build targets in project: 85 00:37:48.890 00:37:48.890 DPDK 24.03.0 00:37:48.890 00:37:48.890 User defined options 00:37:48.890 buildtype : debug 00:37:48.890 default_library : shared 00:37:48.890 libdir : lib 00:37:48.890 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:37:48.890 b_sanitize : address 00:37:48.890 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:37:48.890 c_link_args : 00:37:48.890 cpu_instruction_set: native 00:37:48.890 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:37:48.890 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:37:48.890 enable_docs : false 00:37:48.890 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:37:48.890 enable_kmods : false 00:37:48.890 max_lcores : 128 00:37:48.890 tests : false 00:37:48.890 00:37:48.890 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:37:49.394 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:37:49.394 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:37:49.394 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:37:49.394 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:37:49.394 [4/268] Linking static target lib/librte_kvargs.a 00:37:49.394 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:37:49.394 [6/268] Linking static target lib/librte_log.a 00:37:50.139 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:37:50.139 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:37:50.139 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:37:50.398 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:37:50.398 [11/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:37:50.398 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:37:50.657 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:37:50.657 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:37:50.657 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:37:50.657 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:37:50.657 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:37:50.657 [18/268] Linking static target lib/librte_telemetry.a 00:37:50.657 [19/268] Linking target lib/librte_log.so.24.1 00:37:50.922 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:37:50.922 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:37:51.180 [22/268] Linking target lib/librte_kvargs.so.24.1 00:37:51.439 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:37:51.439 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:37:51.439 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:37:51.697 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:37:51.697 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:37:51.697 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:37:51.956 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:37:51.956 [30/268] Linking target lib/librte_telemetry.so.24.1 00:37:51.956 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:37:52.216 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:37:52.216 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:37:52.476 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:37:52.476 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:37:52.476 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:37:52.735 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:37:52.735 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:37:52.735 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:37:52.735 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:37:52.735 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:37:52.735 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:37:52.994 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:37:52.994 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:37:53.253 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:37:53.253 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:37:53.253 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:37:53.512 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:37:53.512 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:37:53.512 [50/268] Compiling 
C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:37:53.772 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:37:54.031 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:37:54.031 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:37:54.290 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:37:54.290 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:37:54.290 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:37:54.550 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:37:54.550 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:37:54.808 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:37:54.808 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:37:54.808 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:37:54.808 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:37:54.808 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:37:54.808 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:37:55.189 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:37:55.189 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:37:55.189 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:37:55.447 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:37:55.447 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:37:55.447 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:37:55.707 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:37:55.707 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:37:55.707 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:37:55.965 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:37:55.965 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:37:55.965 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:37:55.965 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:37:56.224 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:37:56.224 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:37:56.224 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:37:56.224 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:37:56.482 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:37:56.482 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:37:56.740 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:37:56.740 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:37:56.740 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:37:56.740 [87/268] Linking static target lib/librte_ring.a 00:37:56.998 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:37:56.998 [89/268] Linking static target lib/librte_eal.a 00:37:56.998 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:37:56.998 [91/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:37:57.256 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:37:57.256 [93/268] Linking static target lib/librte_mempool.a 00:37:57.256 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:37:57.256 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:37:57.256 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:37:57.514 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:37:57.514 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:37:57.514 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:37:57.514 [100/268] Linking static target lib/librte_rcu.a 00:37:58.080 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:37:58.080 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:37:58.080 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:37:58.080 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:37:58.080 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:37:58.080 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:37:58.339 [107/268] Linking static target lib/librte_meter.a 00:37:58.339 [108/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:37:58.339 [109/268] Linking static target lib/librte_net.a 00:37:58.597 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:37:58.597 [111/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:37:58.855 [112/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:37:58.856 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:37:58.856 [114/268] Linking static target lib/librte_mbuf.a 00:37:58.856 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:37:58.856 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:37:59.113 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:37:59.113 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:37:59.679 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:37:59.937 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:37:59.937 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:37:59.937 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:37:59.937 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:38:00.196 [124/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:38:00.196 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:38:00.196 [126/268] Linking static target lib/librte_pci.a 00:38:00.453 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:38:00.712 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:38:00.712 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:38:00.712 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:38:00.712 [131/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:38:00.970 [132/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:38:00.970 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:38:00.970 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:38:00.970 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:38:00.970 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:38:00.970 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:38:01.228 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:38:01.228 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:38:01.228 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:38:01.228 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:38:01.228 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:38:01.228 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:38:01.487 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:38:01.487 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:38:01.755 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:38:01.755 [147/268] Linking static target lib/librte_cmdline.a 00:38:02.321 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:38:02.321 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:38:02.321 [150/268] Linking static target lib/librte_timer.a 00:38:02.321 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:38:02.578 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:38:02.578 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:38:02.836 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:38:02.836 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:38:02.836 [156/268] Linking static target lib/librte_hash.a 00:38:03.097 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:38:03.363 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:38:03.363 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:38:03.363 [160/268] Linking static target lib/librte_ethdev.a 00:38:03.363 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:38:03.621 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:38:03.621 [163/268] Linking static target lib/librte_compressdev.a 00:38:03.621 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:38:03.878 [165/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:38:03.878 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:38:04.136 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:38:04.136 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:38:04.136 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:38:04.393 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:38:04.393 [171/268] Linking static target lib/librte_dmadev.a 00:38:04.393 [172/268] Generating lib/hash.sym_chk with a 
custom command (wrapped by meson to capture output) 00:38:04.651 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:38:04.651 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:38:04.909 [175/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:38:05.166 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:38:05.424 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:38:05.424 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:38:05.424 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:38:05.681 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:38:05.681 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:38:05.681 [182/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:38:05.939 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:38:05.939 [184/268] Linking static target lib/librte_cryptodev.a 00:38:06.197 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:38:06.197 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:38:06.197 [187/268] Linking static target lib/librte_power.a 00:38:06.197 [188/268] Linking static target lib/librte_reorder.a 00:38:06.454 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:38:06.712 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:38:06.970 [191/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:38:07.227 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:38:07.485 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:38:07.485 [194/268] Linking static target lib/librte_security.a 00:38:07.485 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:38:07.743 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:38:08.000 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:38:08.000 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:38:08.269 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:38:08.269 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:38:08.534 [201/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:38:08.534 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:38:08.792 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:38:09.050 [204/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:38:09.050 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:38:09.050 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:38:09.308 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:38:09.582 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:38:09.582 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:38:09.840 [210/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:38:09.840 [211/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:38:09.840 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:38:09.840 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:38:10.098 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:38:10.098 [215/268] Linking static target drivers/librte_bus_vdev.a 00:38:10.098 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:38:10.098 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:38:10.356 [218/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:38:10.356 [219/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:38:10.356 [220/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:38:10.356 [221/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:38:10.356 [222/268] Linking static target drivers/librte_mempool_ring.a 00:38:10.356 [223/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:38:10.356 [224/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:38:10.356 [225/268] Linking static target drivers/librte_bus_pci.a 00:38:10.356 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:38:10.923 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:38:11.182 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:38:11.441 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:38:11.441 [230/268] Linking target lib/librte_eal.so.24.1 00:38:11.699 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:38:11.699 [232/268] Linking target lib/librte_timer.so.24.1 00:38:11.699 [233/268] Linking target lib/librte_pci.so.24.1 00:38:11.699 [234/268] Linking target lib/librte_meter.so.24.1 00:38:11.699 [235/268] Linking target lib/librte_ring.so.24.1 00:38:11.699 [236/268] Linking target lib/librte_dmadev.so.24.1 00:38:11.958 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:38:11.958 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:38:11.958 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:38:11.958 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:38:11.958 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:38:11.958 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:38:11.958 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:38:12.217 [244/268] Linking target lib/librte_rcu.so.24.1 00:38:12.217 [245/268] Linking target lib/librte_mempool.so.24.1 00:38:12.475 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:38:12.475 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:38:12.475 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:38:12.475 [249/268] Linking target lib/librte_mbuf.so.24.1 00:38:12.734 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:38:12.734 [251/268] Linking 
target lib/librte_reorder.so.24.1 00:38:12.734 [252/268] Linking target lib/librte_compressdev.so.24.1 00:38:12.734 [253/268] Linking target lib/librte_net.so.24.1 00:38:12.734 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:38:12.993 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:38:12.993 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:38:12.993 [257/268] Linking target lib/librte_security.so.24.1 00:38:12.993 [258/268] Linking target lib/librte_cmdline.so.24.1 00:38:12.993 [259/268] Linking target lib/librte_hash.so.24.1 00:38:13.250 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:38:13.250 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:38:13.250 [262/268] Linking target lib/librte_ethdev.so.24.1 00:38:13.508 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:38:13.508 [264/268] Linking target lib/librte_power.so.24.1 00:38:16.040 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:38:16.040 [266/268] Linking static target lib/librte_vhost.a 00:38:17.943 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:38:17.943 [268/268] Linking target lib/librte_vhost.so.24.1 00:38:17.943 INFO: autodetecting backend as ninja 00:38:17.943 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:38:44.502 CC lib/log/log.o 00:38:44.502 CC lib/log/log_flags.o 00:38:44.502 CC lib/log/log_deprecated.o 00:38:44.502 CC lib/ut_mock/mock.o 00:38:44.502 CC lib/ut/ut.o 00:38:44.502 LIB libspdk_ut_mock.a 00:38:44.502 LIB libspdk_log.a 00:38:44.502 SO libspdk_ut_mock.so.6.0 00:38:44.502 LIB libspdk_ut.a 00:38:44.502 SO libspdk_ut.so.2.0 00:38:44.502 SO libspdk_log.so.7.1 00:38:44.502 SYMLINK libspdk_ut_mock.so 00:38:44.502 SYMLINK libspdk_ut.so 00:38:44.502 SYMLINK libspdk_log.so 00:38:44.502 CC lib/util/base64.o 00:38:44.502 CC lib/util/bit_array.o 00:38:44.502 CC lib/util/cpuset.o 00:38:44.502 CC lib/util/crc16.o 00:38:44.502 CXX lib/trace_parser/trace.o 00:38:44.502 CC lib/util/crc32.o 00:38:44.502 CC lib/util/crc32c.o 00:38:44.503 CC lib/ioat/ioat.o 00:38:44.503 CC lib/dma/dma.o 00:38:44.503 CC lib/util/crc32_ieee.o 00:38:44.503 CC lib/vfio_user/host/vfio_user_pci.o 00:38:44.503 CC lib/vfio_user/host/vfio_user.o 00:38:44.503 CC lib/util/crc64.o 00:38:44.503 CC lib/util/dif.o 00:38:44.503 CC lib/util/fd.o 00:38:44.503 CC lib/util/fd_group.o 00:38:44.503 CC lib/util/file.o 00:38:44.503 CC lib/util/hexlify.o 00:38:44.503 LIB libspdk_ioat.a 00:38:44.503 LIB libspdk_dma.a 00:38:44.503 SO libspdk_ioat.so.7.0 00:38:44.503 CC lib/util/iov.o 00:38:44.503 SO libspdk_dma.so.5.0 00:38:44.503 CC lib/util/math.o 00:38:44.503 SYMLINK libspdk_ioat.so 00:38:44.503 CC lib/util/net.o 00:38:44.503 CC lib/util/pipe.o 00:38:44.503 SYMLINK libspdk_dma.so 00:38:44.503 CC lib/util/strerror_tls.o 00:38:44.503 CC lib/util/string.o 00:38:44.503 CC lib/util/uuid.o 00:38:44.503 LIB libspdk_vfio_user.a 00:38:44.503 CC lib/util/xor.o 00:38:44.503 SO libspdk_vfio_user.so.5.0 00:38:44.503 CC lib/util/zipf.o 00:38:44.503 SYMLINK libspdk_vfio_user.so 00:38:44.503 CC lib/util/md5.o 00:38:44.503 LIB libspdk_trace_parser.a 00:38:44.503 SO libspdk_trace_parser.so.6.0 00:38:44.503 LIB libspdk_util.a 00:38:44.503 SYMLINK libspdk_trace_parser.so 00:38:44.503 SO libspdk_util.so.10.1 
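The abbreviated CC/LIB/SO/SYMLINK lines above are quiet-make output from the SPDK tree; each stands in for a full toolchain command. A generic sketch of what the four kinds of lines correspond to (illustrative only; the real recipes and flags live in the tree's makefiles, and $CFLAGS/$OBJS are placeholders):

    cc $CFLAGS -c lib/log/log.c -o lib/log/log.o                            # "CC lib/log/log.o"
    ar rcs libspdk_log.a $OBJS                                              # "LIB libspdk_log.a"
    cc -shared -Wl,-soname,libspdk_log.so.7.1 -o libspdk_log.so.7.1 $OBJS   # "SO libspdk_log.so.7.1"
    ln -sf libspdk_log.so.7.1 libspdk_log.so                                # "SYMLINK libspdk_log.so"

The versioned name in the SO step carries the library's ABI version; the unversioned SYMLINK is the usual development link that consumers point at when linking.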
00:38:44.503 SYMLINK libspdk_util.so 00:38:44.503 CC lib/vmd/vmd.o 00:38:44.503 CC lib/vmd/led.o 00:38:44.503 CC lib/rdma_utils/rdma_utils.o 00:38:44.503 CC lib/env_dpdk/env.o 00:38:44.503 CC lib/idxd/idxd.o 00:38:44.503 CC lib/env_dpdk/memory.o 00:38:44.503 CC lib/idxd/idxd_kernel.o 00:38:44.503 CC lib/idxd/idxd_user.o 00:38:44.503 CC lib/conf/conf.o 00:38:44.503 CC lib/json/json_parse.o 00:38:44.503 CC lib/json/json_util.o 00:38:44.503 CC lib/json/json_write.o 00:38:44.503 CC lib/env_dpdk/pci.o 00:38:44.503 LIB libspdk_rdma_utils.a 00:38:44.503 LIB libspdk_conf.a 00:38:44.503 CC lib/env_dpdk/init.o 00:38:44.503 SO libspdk_rdma_utils.so.1.0 00:38:44.503 SO libspdk_conf.so.6.0 00:38:44.503 SYMLINK libspdk_rdma_utils.so 00:38:44.503 SYMLINK libspdk_conf.so 00:38:44.503 CC lib/env_dpdk/threads.o 00:38:44.503 CC lib/env_dpdk/pci_ioat.o 00:38:44.503 LIB libspdk_json.a 00:38:44.503 CC lib/rdma_provider/common.o 00:38:44.773 CC lib/rdma_provider/rdma_provider_verbs.o 00:38:44.773 SO libspdk_json.so.6.0 00:38:44.773 CC lib/env_dpdk/pci_virtio.o 00:38:44.773 SYMLINK libspdk_json.so 00:38:44.773 CC lib/env_dpdk/pci_vmd.o 00:38:44.773 CC lib/env_dpdk/pci_idxd.o 00:38:44.773 CC lib/env_dpdk/pci_event.o 00:38:44.773 CC lib/env_dpdk/sigbus_handler.o 00:38:44.773 LIB libspdk_idxd.a 00:38:44.773 LIB libspdk_rdma_provider.a 00:38:45.031 CC lib/env_dpdk/pci_dpdk.o 00:38:45.031 SO libspdk_rdma_provider.so.7.0 00:38:45.031 CC lib/env_dpdk/pci_dpdk_2207.o 00:38:45.031 SO libspdk_idxd.so.12.1 00:38:45.031 LIB libspdk_vmd.a 00:38:45.031 CC lib/env_dpdk/pci_dpdk_2211.o 00:38:45.031 SO libspdk_vmd.so.6.0 00:38:45.031 SYMLINK libspdk_rdma_provider.so 00:38:45.031 SYMLINK libspdk_idxd.so 00:38:45.031 CC lib/jsonrpc/jsonrpc_server.o 00:38:45.031 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:38:45.031 CC lib/jsonrpc/jsonrpc_client.o 00:38:45.031 SYMLINK libspdk_vmd.so 00:38:45.031 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:38:45.290 LIB libspdk_jsonrpc.a 00:38:45.548 SO libspdk_jsonrpc.so.6.0 00:38:45.548 SYMLINK libspdk_jsonrpc.so 00:38:45.806 CC lib/rpc/rpc.o 00:38:46.065 LIB libspdk_rpc.a 00:38:46.065 LIB libspdk_env_dpdk.a 00:38:46.065 SO libspdk_rpc.so.6.0 00:38:46.324 SYMLINK libspdk_rpc.so 00:38:46.324 SO libspdk_env_dpdk.so.15.1 00:38:46.324 SYMLINK libspdk_env_dpdk.so 00:38:46.583 CC lib/trace/trace_flags.o 00:38:46.583 CC lib/trace/trace.o 00:38:46.583 CC lib/notify/notify.o 00:38:46.583 CC lib/notify/notify_rpc.o 00:38:46.583 CC lib/keyring/keyring.o 00:38:46.583 CC lib/trace/trace_rpc.o 00:38:46.583 CC lib/keyring/keyring_rpc.o 00:38:46.842 LIB libspdk_notify.a 00:38:46.842 SO libspdk_notify.so.6.0 00:38:46.842 LIB libspdk_keyring.a 00:38:46.842 SYMLINK libspdk_notify.so 00:38:46.842 LIB libspdk_trace.a 00:38:46.842 SO libspdk_keyring.so.2.0 00:38:46.842 SO libspdk_trace.so.11.0 00:38:46.842 SYMLINK libspdk_keyring.so 00:38:47.100 SYMLINK libspdk_trace.so 00:38:47.100 CC lib/sock/sock_rpc.o 00:38:47.100 CC lib/sock/sock.o 00:38:47.100 CC lib/thread/thread.o 00:38:47.100 CC lib/thread/iobuf.o 00:38:48.034 LIB libspdk_sock.a 00:38:48.034 SO libspdk_sock.so.10.0 00:38:48.034 SYMLINK libspdk_sock.so 00:38:48.291 CC lib/nvme/nvme_ctrlr_cmd.o 00:38:48.291 CC lib/nvme/nvme_ctrlr.o 00:38:48.291 CC lib/nvme/nvme_fabric.o 00:38:48.291 CC lib/nvme/nvme_ns_cmd.o 00:38:48.291 CC lib/nvme/nvme_ns.o 00:38:48.291 CC lib/nvme/nvme_pcie_common.o 00:38:48.291 CC lib/nvme/nvme_pcie.o 00:38:48.291 CC lib/nvme/nvme_qpair.o 00:38:48.291 CC lib/nvme/nvme.o 00:38:49.224 CC lib/nvme/nvme_quirks.o 00:38:49.224 CC lib/nvme/nvme_transport.o 
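The lib/jsonrpc and lib/rpc objects above build the JSON-RPC service that SPDK applications expose for runtime configuration. Once a target linked against these libraries is running, that service can be exercised over its Unix-domain socket; a sketch assuming the default /var/tmp/spdk.sock path and the rpc_get_methods method, neither of which appears in this log:

    # raw JSON-RPC 2.0 request against a running target's socket
    printf '{"jsonrpc":"2.0","method":"rpc_get_methods","id":1}' | nc -U /var/tmp/spdk.sock

    # the same call through the helper script shipped in the source tree,
    # which wraps the same socket protocol
    ./scripts/rpc.py rpc_get_methods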
00:38:49.483 LIB libspdk_thread.a 00:38:49.483 SO libspdk_thread.so.11.0 00:38:49.483 CC lib/nvme/nvme_discovery.o 00:38:49.483 SYMLINK libspdk_thread.so 00:38:49.483 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:38:49.741 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:38:49.741 CC lib/nvme/nvme_tcp.o 00:38:50.000 CC lib/nvme/nvme_opal.o 00:38:50.000 CC lib/nvme/nvme_io_msg.o 00:38:50.000 CC lib/nvme/nvme_poll_group.o 00:38:50.258 CC lib/nvme/nvme_zns.o 00:38:50.259 CC lib/nvme/nvme_stubs.o 00:38:50.259 CC lib/nvme/nvme_auth.o 00:38:50.259 CC lib/nvme/nvme_cuse.o 00:38:50.518 CC lib/nvme/nvme_rdma.o 00:38:50.777 CC lib/accel/accel.o 00:38:50.777 CC lib/accel/accel_rpc.o 00:38:50.777 CC lib/accel/accel_sw.o 00:38:50.777 CC lib/blob/blobstore.o 00:38:51.036 CC lib/init/json_config.o 00:38:51.294 CC lib/virtio/virtio.o 00:38:51.294 CC lib/virtio/virtio_vhost_user.o 00:38:51.294 CC lib/init/subsystem.o 00:38:51.552 CC lib/init/subsystem_rpc.o 00:38:51.552 CC lib/blob/request.o 00:38:51.552 CC lib/blob/zeroes.o 00:38:51.552 CC lib/init/rpc.o 00:38:51.552 CC lib/blob/blob_bs_dev.o 00:38:51.552 CC lib/virtio/virtio_vfio_user.o 00:38:51.810 LIB libspdk_init.a 00:38:51.810 CC lib/virtio/virtio_pci.o 00:38:51.810 SO libspdk_init.so.6.0 00:38:51.810 SYMLINK libspdk_init.so 00:38:52.069 CC lib/fsdev/fsdev.o 00:38:52.069 CC lib/fsdev/fsdev_io.o 00:38:52.069 CC lib/fsdev/fsdev_rpc.o 00:38:52.069 CC lib/event/reactor.o 00:38:52.069 CC lib/event/app.o 00:38:52.069 CC lib/event/log_rpc.o 00:38:52.328 LIB libspdk_accel.a 00:38:52.328 LIB libspdk_virtio.a 00:38:52.328 SO libspdk_accel.so.16.0 00:38:52.328 SO libspdk_virtio.so.7.0 00:38:52.328 SYMLINK libspdk_virtio.so 00:38:52.328 CC lib/event/app_rpc.o 00:38:52.328 SYMLINK libspdk_accel.so 00:38:52.328 CC lib/event/scheduler_static.o 00:38:52.587 CC lib/bdev/bdev.o 00:38:52.587 CC lib/bdev/bdev_zone.o 00:38:52.587 CC lib/bdev/bdev_rpc.o 00:38:52.587 CC lib/bdev/part.o 00:38:52.587 CC lib/bdev/scsi_nvme.o 00:38:52.846 LIB libspdk_nvme.a 00:38:52.846 LIB libspdk_event.a 00:38:52.846 LIB libspdk_fsdev.a 00:38:52.846 SO libspdk_event.so.14.0 00:38:52.846 SO libspdk_fsdev.so.2.0 00:38:52.846 SYMLINK libspdk_event.so 00:38:52.846 SO libspdk_nvme.so.15.0 00:38:52.846 SYMLINK libspdk_fsdev.so 00:38:53.105 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:38:53.363 SYMLINK libspdk_nvme.so 00:38:54.300 LIB libspdk_fuse_dispatcher.a 00:38:54.300 SO libspdk_fuse_dispatcher.so.1.0 00:38:54.300 SYMLINK libspdk_fuse_dispatcher.so 00:38:55.676 LIB libspdk_blob.a 00:38:55.676 SO libspdk_blob.so.12.0 00:38:55.676 SYMLINK libspdk_blob.so 00:38:55.934 CC lib/blobfs/blobfs.o 00:38:55.934 CC lib/blobfs/tree.o 00:38:55.934 CC lib/lvol/lvol.o 00:38:56.872 LIB libspdk_bdev.a 00:38:56.872 SO libspdk_bdev.so.17.0 00:38:56.872 SYMLINK libspdk_bdev.so 00:38:57.131 CC lib/scsi/dev.o 00:38:57.131 CC lib/scsi/port.o 00:38:57.131 CC lib/scsi/lun.o 00:38:57.131 CC lib/scsi/scsi.o 00:38:57.131 CC lib/ftl/ftl_core.o 00:38:57.131 CC lib/nbd/nbd.o 00:38:57.131 CC lib/nvmf/ctrlr.o 00:38:57.131 CC lib/ublk/ublk.o 00:38:57.131 LIB libspdk_blobfs.a 00:38:57.131 SO libspdk_blobfs.so.11.0 00:38:57.389 SYMLINK libspdk_blobfs.so 00:38:57.389 CC lib/ublk/ublk_rpc.o 00:38:57.389 LIB libspdk_lvol.a 00:38:57.389 CC lib/nbd/nbd_rpc.o 00:38:57.389 SO libspdk_lvol.so.11.0 00:38:57.389 SYMLINK libspdk_lvol.so 00:38:57.389 CC lib/nvmf/ctrlr_discovery.o 00:38:57.389 CC lib/scsi/scsi_bdev.o 00:38:57.648 CC lib/scsi/scsi_pr.o 00:38:57.648 CC lib/scsi/scsi_rpc.o 00:38:57.648 CC lib/ftl/ftl_init.o 00:38:57.648 CC lib/scsi/task.o 00:38:57.648 
CC lib/nvmf/ctrlr_bdev.o 00:38:57.907 CC lib/nvmf/subsystem.o 00:38:57.907 CC lib/ftl/ftl_layout.o 00:38:57.907 CC lib/ftl/ftl_debug.o 00:38:58.166 CC lib/ftl/ftl_io.o 00:38:58.166 LIB libspdk_nbd.a 00:38:58.166 CC lib/ftl/ftl_sb.o 00:38:58.166 SO libspdk_nbd.so.7.0 00:38:58.166 LIB libspdk_ublk.a 00:38:58.166 SO libspdk_ublk.so.3.0 00:38:58.166 SYMLINK libspdk_nbd.so 00:38:58.166 CC lib/ftl/ftl_l2p.o 00:38:58.166 CC lib/nvmf/nvmf.o 00:38:58.424 SYMLINK libspdk_ublk.so 00:38:58.424 CC lib/nvmf/nvmf_rpc.o 00:38:58.424 CC lib/ftl/ftl_l2p_flat.o 00:38:58.424 CC lib/ftl/ftl_nv_cache.o 00:38:58.424 CC lib/ftl/ftl_band.o 00:38:58.683 CC lib/ftl/ftl_band_ops.o 00:38:58.683 LIB libspdk_scsi.a 00:38:58.683 CC lib/ftl/ftl_writer.o 00:38:58.683 CC lib/ftl/ftl_rq.o 00:38:58.683 SO libspdk_scsi.so.9.0 00:38:58.942 SYMLINK libspdk_scsi.so 00:38:58.942 CC lib/nvmf/transport.o 00:38:58.942 CC lib/ftl/ftl_reloc.o 00:38:58.942 CC lib/ftl/ftl_l2p_cache.o 00:38:59.201 CC lib/iscsi/conn.o 00:38:59.201 CC lib/iscsi/init_grp.o 00:38:59.460 CC lib/iscsi/iscsi.o 00:38:59.719 CC lib/iscsi/param.o 00:38:59.719 CC lib/iscsi/portal_grp.o 00:38:59.978 CC lib/vhost/vhost.o 00:38:59.978 CC lib/vhost/vhost_rpc.o 00:39:00.238 CC lib/ftl/ftl_p2l.o 00:39:00.238 CC lib/iscsi/tgt_node.o 00:39:00.238 CC lib/iscsi/iscsi_subsystem.o 00:39:00.238 CC lib/iscsi/iscsi_rpc.o 00:39:00.238 CC lib/nvmf/tcp.o 00:39:00.238 CC lib/nvmf/stubs.o 00:39:00.496 CC lib/ftl/ftl_p2l_log.o 00:39:00.496 CC lib/nvmf/mdns_server.o 00:39:00.756 CC lib/vhost/vhost_scsi.o 00:39:00.756 CC lib/nvmf/rdma.o 00:39:00.756 CC lib/nvmf/auth.o 00:39:01.097 CC lib/ftl/mngt/ftl_mngt.o 00:39:01.097 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:39:01.097 CC lib/vhost/vhost_blk.o 00:39:01.097 CC lib/vhost/rte_vhost_user.o 00:39:01.097 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:39:01.565 CC lib/ftl/mngt/ftl_mngt_startup.o 00:39:01.565 CC lib/iscsi/task.o 00:39:01.565 CC lib/ftl/mngt/ftl_mngt_md.o 00:39:01.565 CC lib/ftl/mngt/ftl_mngt_misc.o 00:39:01.565 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:39:01.565 LIB libspdk_iscsi.a 00:39:01.825 SO libspdk_iscsi.so.8.0 00:39:01.825 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:39:01.825 CC lib/ftl/mngt/ftl_mngt_band.o 00:39:02.085 SYMLINK libspdk_iscsi.so 00:39:02.085 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:39:02.085 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:39:02.085 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:39:02.085 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:39:02.343 CC lib/ftl/utils/ftl_conf.o 00:39:02.343 CC lib/ftl/utils/ftl_md.o 00:39:02.343 CC lib/ftl/utils/ftl_mempool.o 00:39:02.343 CC lib/ftl/utils/ftl_bitmap.o 00:39:02.343 CC lib/ftl/utils/ftl_property.o 00:39:02.602 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:39:02.602 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:39:02.602 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:39:02.602 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:39:02.860 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:39:02.860 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:39:02.860 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:39:02.860 CC lib/ftl/upgrade/ftl_sb_v3.o 00:39:02.860 CC lib/ftl/upgrade/ftl_sb_v5.o 00:39:02.860 CC lib/ftl/nvc/ftl_nvc_dev.o 00:39:02.860 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:39:03.117 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:39:03.117 LIB libspdk_vhost.a 00:39:03.117 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:39:03.117 CC lib/ftl/base/ftl_base_dev.o 00:39:03.117 SO libspdk_vhost.so.8.0 00:39:03.117 CC lib/ftl/base/ftl_base_bdev.o 00:39:03.117 CC lib/ftl/ftl_trace.o 00:39:03.382 SYMLINK libspdk_vhost.so 00:39:03.641 LIB libspdk_ftl.a 00:39:03.899 SO 
libspdk_ftl.so.9.0 00:39:03.899 LIB libspdk_nvmf.a 00:39:04.157 SO libspdk_nvmf.so.20.0 00:39:04.157 SYMLINK libspdk_ftl.so 00:39:04.415 SYMLINK libspdk_nvmf.so 00:39:04.674 CC module/env_dpdk/env_dpdk_rpc.o 00:39:04.933 CC module/fsdev/aio/fsdev_aio.o 00:39:04.933 CC module/accel/error/accel_error.o 00:39:04.933 CC module/sock/posix/posix.o 00:39:04.933 CC module/keyring/file/keyring.o 00:39:04.933 CC module/blob/bdev/blob_bdev.o 00:39:04.933 CC module/keyring/linux/keyring.o 00:39:04.933 CC module/accel/dsa/accel_dsa.o 00:39:04.933 CC module/accel/ioat/accel_ioat.o 00:39:04.933 CC module/scheduler/dynamic/scheduler_dynamic.o 00:39:04.933 LIB libspdk_env_dpdk_rpc.a 00:39:04.933 SO libspdk_env_dpdk_rpc.so.6.0 00:39:04.933 CC module/keyring/file/keyring_rpc.o 00:39:05.192 SYMLINK libspdk_env_dpdk_rpc.so 00:39:05.192 CC module/keyring/linux/keyring_rpc.o 00:39:05.192 CC module/accel/ioat/accel_ioat_rpc.o 00:39:05.192 CC module/fsdev/aio/fsdev_aio_rpc.o 00:39:05.192 CC module/accel/error/accel_error_rpc.o 00:39:05.192 LIB libspdk_scheduler_dynamic.a 00:39:05.192 SO libspdk_scheduler_dynamic.so.4.0 00:39:05.192 LIB libspdk_blob_bdev.a 00:39:05.192 SYMLINK libspdk_scheduler_dynamic.so 00:39:05.192 LIB libspdk_keyring_linux.a 00:39:05.192 LIB libspdk_accel_ioat.a 00:39:05.192 SO libspdk_blob_bdev.so.12.0 00:39:05.192 LIB libspdk_keyring_file.a 00:39:05.192 SO libspdk_keyring_linux.so.1.0 00:39:05.192 LIB libspdk_accel_error.a 00:39:05.192 SO libspdk_accel_ioat.so.6.0 00:39:05.192 SO libspdk_keyring_file.so.2.0 00:39:05.192 CC module/accel/dsa/accel_dsa_rpc.o 00:39:05.192 SO libspdk_accel_error.so.2.0 00:39:05.450 SYMLINK libspdk_keyring_linux.so 00:39:05.450 SYMLINK libspdk_blob_bdev.so 00:39:05.450 CC module/fsdev/aio/linux_aio_mgr.o 00:39:05.450 SYMLINK libspdk_accel_ioat.so 00:39:05.450 SYMLINK libspdk_keyring_file.so 00:39:05.450 SYMLINK libspdk_accel_error.so 00:39:05.450 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:39:05.450 LIB libspdk_accel_dsa.a 00:39:05.450 CC module/accel/iaa/accel_iaa.o 00:39:05.450 CC module/accel/iaa/accel_iaa_rpc.o 00:39:05.450 SO libspdk_accel_dsa.so.5.0 00:39:05.708 CC module/scheduler/gscheduler/gscheduler.o 00:39:05.708 SYMLINK libspdk_accel_dsa.so 00:39:05.708 LIB libspdk_scheduler_dpdk_governor.a 00:39:05.708 SO libspdk_scheduler_dpdk_governor.so.4.0 00:39:05.708 CC module/blobfs/bdev/blobfs_bdev.o 00:39:05.708 CC module/bdev/delay/vbdev_delay.o 00:39:05.708 LIB libspdk_fsdev_aio.a 00:39:05.708 SYMLINK libspdk_scheduler_dpdk_governor.so 00:39:05.708 LIB libspdk_accel_iaa.a 00:39:05.708 SO libspdk_fsdev_aio.so.1.0 00:39:05.708 SO libspdk_accel_iaa.so.3.0 00:39:05.708 CC module/bdev/error/vbdev_error.o 00:39:05.708 LIB libspdk_scheduler_gscheduler.a 00:39:05.708 LIB libspdk_sock_posix.a 00:39:05.967 CC module/bdev/gpt/gpt.o 00:39:05.967 SO libspdk_scheduler_gscheduler.so.4.0 00:39:05.967 CC module/bdev/lvol/vbdev_lvol.o 00:39:05.967 SO libspdk_sock_posix.so.6.0 00:39:05.967 SYMLINK libspdk_fsdev_aio.so 00:39:05.967 CC module/bdev/gpt/vbdev_gpt.o 00:39:05.967 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:39:05.967 SYMLINK libspdk_accel_iaa.so 00:39:05.967 SYMLINK libspdk_scheduler_gscheduler.so 00:39:05.967 CC module/bdev/delay/vbdev_delay_rpc.o 00:39:05.967 CC module/bdev/malloc/bdev_malloc.o 00:39:05.967 SYMLINK libspdk_sock_posix.so 00:39:06.226 CC module/bdev/malloc/bdev_malloc_rpc.o 00:39:06.226 CC module/bdev/null/bdev_null.o 00:39:06.226 LIB libspdk_blobfs_bdev.a 00:39:06.226 CC module/bdev/nvme/bdev_nvme.o 00:39:06.226 SO libspdk_blobfs_bdev.so.6.0 
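Each SO/SYMLINK pair in this run leaves behind a versioned shared object plus an unversioned development symlink. Assuming the conventional build/lib output directory under the repo (an assumption; the log never prints the destination), the artifacts can be sanity-checked like so:

    ls -l build/lib/libspdk_nvmf.so*                          # symlink -> libspdk_nvmf.so.20.0
    readelf -d build/lib/libspdk_nvmf.so.20.0 | grep SONAME   # embedded soname
    nm -D --defined-only build/lib/libspdk_nvmf.so | head     # sample of exported symbols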
00:39:06.226 CC module/bdev/error/vbdev_error_rpc.o 00:39:06.226 LIB libspdk_bdev_gpt.a 00:39:06.226 CC module/bdev/nvme/bdev_nvme_rpc.o 00:39:06.226 SYMLINK libspdk_blobfs_bdev.so 00:39:06.226 CC module/bdev/nvme/nvme_rpc.o 00:39:06.226 SO libspdk_bdev_gpt.so.6.0 00:39:06.226 LIB libspdk_bdev_delay.a 00:39:06.484 SO libspdk_bdev_delay.so.6.0 00:39:06.484 SYMLINK libspdk_bdev_gpt.so 00:39:06.484 SYMLINK libspdk_bdev_delay.so 00:39:06.484 CC module/bdev/null/bdev_null_rpc.o 00:39:06.484 LIB libspdk_bdev_malloc.a 00:39:06.484 LIB libspdk_bdev_error.a 00:39:06.484 SO libspdk_bdev_error.so.6.0 00:39:06.484 SO libspdk_bdev_malloc.so.6.0 00:39:06.484 CC module/bdev/passthru/vbdev_passthru.o 00:39:06.484 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:39:06.743 CC module/bdev/raid/bdev_raid.o 00:39:06.743 SYMLINK libspdk_bdev_error.so 00:39:06.743 SYMLINK libspdk_bdev_malloc.so 00:39:06.743 LIB libspdk_bdev_null.a 00:39:06.743 CC module/bdev/split/vbdev_split.o 00:39:06.743 SO libspdk_bdev_null.so.6.0 00:39:06.743 SYMLINK libspdk_bdev_null.so 00:39:06.743 CC module/bdev/split/vbdev_split_rpc.o 00:39:06.743 CC module/bdev/xnvme/bdev_xnvme.o 00:39:06.743 CC module/bdev/zone_block/vbdev_zone_block.o 00:39:07.001 CC module/bdev/aio/bdev_aio.o 00:39:07.001 CC module/bdev/aio/bdev_aio_rpc.o 00:39:07.001 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:39:07.001 LIB libspdk_bdev_split.a 00:39:07.001 SO libspdk_bdev_split.so.6.0 00:39:07.001 LIB libspdk_bdev_lvol.a 00:39:07.260 SO libspdk_bdev_lvol.so.6.0 00:39:07.260 SYMLINK libspdk_bdev_split.so 00:39:07.260 CC module/bdev/raid/bdev_raid_rpc.o 00:39:07.260 CC module/bdev/raid/bdev_raid_sb.o 00:39:07.260 LIB libspdk_bdev_passthru.a 00:39:07.260 SYMLINK libspdk_bdev_lvol.so 00:39:07.260 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:39:07.260 CC module/bdev/nvme/bdev_mdns_client.o 00:39:07.260 SO libspdk_bdev_passthru.so.6.0 00:39:07.260 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:39:07.260 SYMLINK libspdk_bdev_passthru.so 00:39:07.260 LIB libspdk_bdev_aio.a 00:39:07.260 SO libspdk_bdev_aio.so.6.0 00:39:07.260 CC module/bdev/ftl/bdev_ftl.o 00:39:07.519 LIB libspdk_bdev_xnvme.a 00:39:07.519 CC module/bdev/nvme/vbdev_opal.o 00:39:07.519 SYMLINK libspdk_bdev_aio.so 00:39:07.519 CC module/bdev/ftl/bdev_ftl_rpc.o 00:39:07.519 SO libspdk_bdev_xnvme.so.3.0 00:39:07.519 CC module/bdev/iscsi/bdev_iscsi.o 00:39:07.519 LIB libspdk_bdev_zone_block.a 00:39:07.519 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:39:07.519 SYMLINK libspdk_bdev_xnvme.so 00:39:07.519 CC module/bdev/nvme/vbdev_opal_rpc.o 00:39:07.519 SO libspdk_bdev_zone_block.so.6.0 00:39:07.519 SYMLINK libspdk_bdev_zone_block.so 00:39:07.519 CC module/bdev/raid/raid0.o 00:39:07.519 CC module/bdev/virtio/bdev_virtio_scsi.o 00:39:07.778 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:39:07.778 CC module/bdev/raid/raid1.o 00:39:07.778 LIB libspdk_bdev_ftl.a 00:39:07.778 CC module/bdev/raid/concat.o 00:39:07.778 SO libspdk_bdev_ftl.so.6.0 00:39:07.778 SYMLINK libspdk_bdev_ftl.so 00:39:07.778 CC module/bdev/virtio/bdev_virtio_blk.o 00:39:07.778 CC module/bdev/virtio/bdev_virtio_rpc.o 00:39:08.037 LIB libspdk_bdev_iscsi.a 00:39:08.037 SO libspdk_bdev_iscsi.so.6.0 00:39:08.037 SYMLINK libspdk_bdev_iscsi.so 00:39:08.037 LIB libspdk_bdev_raid.a 00:39:08.295 SO libspdk_bdev_raid.so.6.0 00:39:08.295 SYMLINK libspdk_bdev_raid.so 00:39:08.295 LIB libspdk_bdev_virtio.a 00:39:08.295 SO libspdk_bdev_virtio.so.6.0 00:39:08.553 SYMLINK libspdk_bdev_virtio.so 00:39:09.927 LIB libspdk_bdev_nvme.a 00:39:10.183 SO libspdk_bdev_nvme.so.7.1 
00:39:10.183 SYMLINK libspdk_bdev_nvme.so 00:39:10.747 CC module/event/subsystems/fsdev/fsdev.o 00:39:10.747 CC module/event/subsystems/vmd/vmd.o 00:39:10.747 CC module/event/subsystems/vmd/vmd_rpc.o 00:39:10.747 CC module/event/subsystems/keyring/keyring.o 00:39:10.747 CC module/event/subsystems/iobuf/iobuf.o 00:39:10.747 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:39:10.747 CC module/event/subsystems/scheduler/scheduler.o 00:39:10.747 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:39:10.747 CC module/event/subsystems/sock/sock.o 00:39:11.006 LIB libspdk_event_keyring.a 00:39:11.006 LIB libspdk_event_vhost_blk.a 00:39:11.006 SO libspdk_event_keyring.so.1.0 00:39:11.006 LIB libspdk_event_fsdev.a 00:39:11.006 LIB libspdk_event_sock.a 00:39:11.006 SO libspdk_event_vhost_blk.so.3.0 00:39:11.006 LIB libspdk_event_scheduler.a 00:39:11.006 SO libspdk_event_fsdev.so.1.0 00:39:11.006 SO libspdk_event_sock.so.5.0 00:39:11.006 SO libspdk_event_scheduler.so.4.0 00:39:11.006 LIB libspdk_event_vmd.a 00:39:11.006 SYMLINK libspdk_event_keyring.so 00:39:11.006 LIB libspdk_event_iobuf.a 00:39:11.006 SO libspdk_event_vmd.so.6.0 00:39:11.006 SYMLINK libspdk_event_sock.so 00:39:11.006 SYMLINK libspdk_event_vhost_blk.so 00:39:11.006 SYMLINK libspdk_event_scheduler.so 00:39:11.006 SO libspdk_event_iobuf.so.3.0 00:39:11.006 SYMLINK libspdk_event_fsdev.so 00:39:11.006 SYMLINK libspdk_event_vmd.so 00:39:11.264 SYMLINK libspdk_event_iobuf.so 00:39:11.522 CC module/event/subsystems/accel/accel.o 00:39:11.522 LIB libspdk_event_accel.a 00:39:11.522 SO libspdk_event_accel.so.6.0 00:39:11.779 SYMLINK libspdk_event_accel.so 00:39:12.037 CC module/event/subsystems/bdev/bdev.o 00:39:12.037 LIB libspdk_event_bdev.a 00:39:12.037 SO libspdk_event_bdev.so.6.0 00:39:12.298 SYMLINK libspdk_event_bdev.so 00:39:12.559 CC module/event/subsystems/ublk/ublk.o 00:39:12.559 CC module/event/subsystems/scsi/scsi.o 00:39:12.559 CC module/event/subsystems/nbd/nbd.o 00:39:12.559 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:39:12.559 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:39:12.559 LIB libspdk_event_ublk.a 00:39:12.559 LIB libspdk_event_scsi.a 00:39:12.559 LIB libspdk_event_nbd.a 00:39:12.559 SO libspdk_event_ublk.so.3.0 00:39:12.559 SO libspdk_event_scsi.so.6.0 00:39:12.559 SO libspdk_event_nbd.so.6.0 00:39:12.559 SYMLINK libspdk_event_ublk.so 00:39:12.559 SYMLINK libspdk_event_scsi.so 00:39:12.817 SYMLINK libspdk_event_nbd.so 00:39:12.817 LIB libspdk_event_nvmf.a 00:39:12.817 SO libspdk_event_nvmf.so.6.0 00:39:12.817 SYMLINK libspdk_event_nvmf.so 00:39:12.817 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:39:12.817 CC module/event/subsystems/iscsi/iscsi.o 00:39:13.074 LIB libspdk_event_vhost_scsi.a 00:39:13.075 SO libspdk_event_vhost_scsi.so.3.0 00:39:13.075 LIB libspdk_event_iscsi.a 00:39:13.075 SYMLINK libspdk_event_vhost_scsi.so 00:39:13.075 SO libspdk_event_iscsi.so.6.0 00:39:13.332 SYMLINK libspdk_event_iscsi.so 00:39:13.332 SO libspdk.so.6.0 00:39:13.332 SYMLINK libspdk.so 00:39:13.589 CXX app/trace/trace.o 00:39:13.589 CC app/trace_record/trace_record.o 00:39:13.589 CC app/iscsi_tgt/iscsi_tgt.o 00:39:13.845 CC examples/interrupt_tgt/interrupt_tgt.o 00:39:13.845 CC app/nvmf_tgt/nvmf_main.o 00:39:13.845 CC examples/util/zipf/zipf.o 00:39:13.845 CC test/thread/poller_perf/poller_perf.o 00:39:13.845 CC examples/ioat/perf/perf.o 00:39:13.845 CC test/app/bdev_svc/bdev_svc.o 00:39:13.845 CC test/dma/test_dma/test_dma.o 00:39:13.845 LINK poller_perf 00:39:14.104 LINK iscsi_tgt 00:39:14.104 LINK zipf 00:39:14.104 LINK 
interrupt_tgt 00:39:14.104 LINK nvmf_tgt 00:39:14.104 LINK ioat_perf 00:39:14.104 LINK bdev_svc 00:39:14.362 LINK spdk_trace_record 00:39:14.362 CC test/app/histogram_perf/histogram_perf.o 00:39:14.362 CC test/app/jsoncat/jsoncat.o 00:39:14.362 CC examples/ioat/verify/verify.o 00:39:14.362 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:39:14.619 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:39:14.619 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:39:14.619 LINK spdk_trace 00:39:14.619 LINK histogram_perf 00:39:14.619 LINK jsoncat 00:39:14.619 CC examples/thread/thread/thread_ex.o 00:39:14.619 CC app/spdk_tgt/spdk_tgt.o 00:39:14.875 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:39:14.875 LINK verify 00:39:14.875 CC test/app/stub/stub.o 00:39:14.875 LINK test_dma 00:39:14.875 LINK spdk_tgt 00:39:14.875 CC app/spdk_nvme_perf/perf.o 00:39:14.875 CC app/spdk_lspci/spdk_lspci.o 00:39:15.133 LINK spdk_lspci 00:39:15.133 LINK stub 00:39:15.133 LINK thread 00:39:15.133 TEST_HEADER include/spdk/accel.h 00:39:15.133 TEST_HEADER include/spdk/accel_module.h 00:39:15.133 TEST_HEADER include/spdk/assert.h 00:39:15.391 TEST_HEADER include/spdk/barrier.h 00:39:15.391 TEST_HEADER include/spdk/base64.h 00:39:15.391 TEST_HEADER include/spdk/bdev.h 00:39:15.391 TEST_HEADER include/spdk/bdev_module.h 00:39:15.391 TEST_HEADER include/spdk/bdev_zone.h 00:39:15.391 TEST_HEADER include/spdk/bit_array.h 00:39:15.391 TEST_HEADER include/spdk/bit_pool.h 00:39:15.391 CC examples/sock/hello_world/hello_sock.o 00:39:15.391 TEST_HEADER include/spdk/blob_bdev.h 00:39:15.391 TEST_HEADER include/spdk/blobfs_bdev.h 00:39:15.391 TEST_HEADER include/spdk/blobfs.h 00:39:15.391 TEST_HEADER include/spdk/blob.h 00:39:15.391 TEST_HEADER include/spdk/conf.h 00:39:15.391 TEST_HEADER include/spdk/config.h 00:39:15.391 TEST_HEADER include/spdk/cpuset.h 00:39:15.391 TEST_HEADER include/spdk/crc16.h 00:39:15.391 TEST_HEADER include/spdk/crc32.h 00:39:15.391 TEST_HEADER include/spdk/crc64.h 00:39:15.391 TEST_HEADER include/spdk/dif.h 00:39:15.391 TEST_HEADER include/spdk/dma.h 00:39:15.391 CC app/spdk_nvme_identify/identify.o 00:39:15.391 TEST_HEADER include/spdk/endian.h 00:39:15.391 TEST_HEADER include/spdk/env_dpdk.h 00:39:15.391 TEST_HEADER include/spdk/env.h 00:39:15.391 TEST_HEADER include/spdk/event.h 00:39:15.391 LINK vhost_fuzz 00:39:15.391 TEST_HEADER include/spdk/fd_group.h 00:39:15.391 TEST_HEADER include/spdk/fd.h 00:39:15.391 TEST_HEADER include/spdk/file.h 00:39:15.391 TEST_HEADER include/spdk/fsdev.h 00:39:15.391 TEST_HEADER include/spdk/fsdev_module.h 00:39:15.391 TEST_HEADER include/spdk/ftl.h 00:39:15.391 TEST_HEADER include/spdk/fuse_dispatcher.h 00:39:15.391 TEST_HEADER include/spdk/gpt_spec.h 00:39:15.391 TEST_HEADER include/spdk/hexlify.h 00:39:15.391 TEST_HEADER include/spdk/histogram_data.h 00:39:15.391 TEST_HEADER include/spdk/idxd.h 00:39:15.391 TEST_HEADER include/spdk/idxd_spec.h 00:39:15.391 TEST_HEADER include/spdk/init.h 00:39:15.391 TEST_HEADER include/spdk/ioat.h 00:39:15.391 TEST_HEADER include/spdk/ioat_spec.h 00:39:15.391 TEST_HEADER include/spdk/iscsi_spec.h 00:39:15.391 TEST_HEADER include/spdk/json.h 00:39:15.391 TEST_HEADER include/spdk/jsonrpc.h 00:39:15.391 TEST_HEADER include/spdk/keyring.h 00:39:15.391 TEST_HEADER include/spdk/keyring_module.h 00:39:15.391 TEST_HEADER include/spdk/likely.h 00:39:15.391 TEST_HEADER include/spdk/log.h 00:39:15.391 TEST_HEADER include/spdk/lvol.h 00:39:15.391 TEST_HEADER include/spdk/md5.h 00:39:15.391 TEST_HEADER include/spdk/memory.h 00:39:15.391 TEST_HEADER 
include/spdk/mmio.h 00:39:15.391 TEST_HEADER include/spdk/nbd.h 00:39:15.391 TEST_HEADER include/spdk/net.h 00:39:15.391 TEST_HEADER include/spdk/notify.h 00:39:15.391 TEST_HEADER include/spdk/nvme.h 00:39:15.391 TEST_HEADER include/spdk/nvme_intel.h 00:39:15.391 TEST_HEADER include/spdk/nvme_ocssd.h 00:39:15.391 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:39:15.392 TEST_HEADER include/spdk/nvme_spec.h 00:39:15.392 TEST_HEADER include/spdk/nvme_zns.h 00:39:15.392 LINK nvme_fuzz 00:39:15.392 TEST_HEADER include/spdk/nvmf_cmd.h 00:39:15.392 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:39:15.392 TEST_HEADER include/spdk/nvmf.h 00:39:15.392 TEST_HEADER include/spdk/nvmf_spec.h 00:39:15.392 TEST_HEADER include/spdk/nvmf_transport.h 00:39:15.392 TEST_HEADER include/spdk/opal.h 00:39:15.392 TEST_HEADER include/spdk/opal_spec.h 00:39:15.392 TEST_HEADER include/spdk/pci_ids.h 00:39:15.392 TEST_HEADER include/spdk/pipe.h 00:39:15.392 TEST_HEADER include/spdk/queue.h 00:39:15.392 TEST_HEADER include/spdk/reduce.h 00:39:15.392 TEST_HEADER include/spdk/rpc.h 00:39:15.392 TEST_HEADER include/spdk/scheduler.h 00:39:15.392 CC app/spdk_nvme_discover/discovery_aer.o 00:39:15.392 TEST_HEADER include/spdk/scsi.h 00:39:15.392 TEST_HEADER include/spdk/scsi_spec.h 00:39:15.392 TEST_HEADER include/spdk/sock.h 00:39:15.392 TEST_HEADER include/spdk/stdinc.h 00:39:15.392 TEST_HEADER include/spdk/string.h 00:39:15.392 TEST_HEADER include/spdk/thread.h 00:39:15.392 TEST_HEADER include/spdk/trace.h 00:39:15.392 TEST_HEADER include/spdk/trace_parser.h 00:39:15.392 TEST_HEADER include/spdk/tree.h 00:39:15.392 TEST_HEADER include/spdk/ublk.h 00:39:15.392 TEST_HEADER include/spdk/util.h 00:39:15.392 TEST_HEADER include/spdk/uuid.h 00:39:15.392 TEST_HEADER include/spdk/version.h 00:39:15.392 TEST_HEADER include/spdk/vfio_user_pci.h 00:39:15.649 TEST_HEADER include/spdk/vfio_user_spec.h 00:39:15.649 TEST_HEADER include/spdk/vhost.h 00:39:15.649 TEST_HEADER include/spdk/vmd.h 00:39:15.649 TEST_HEADER include/spdk/xor.h 00:39:15.649 TEST_HEADER include/spdk/zipf.h 00:39:15.649 CXX test/cpp_headers/accel_module.o 00:39:15.649 CXX test/cpp_headers/accel.o 00:39:15.649 CC app/spdk_top/spdk_top.o 00:39:15.649 LINK hello_sock 00:39:15.649 CXX test/cpp_headers/assert.o 00:39:15.649 LINK spdk_nvme_discover 00:39:15.906 CC test/env/mem_callbacks/mem_callbacks.o 00:39:15.906 CXX test/cpp_headers/barrier.o 00:39:15.906 CC examples/idxd/perf/perf.o 00:39:15.906 CC examples/vmd/lsvmd/lsvmd.o 00:39:16.164 CXX test/cpp_headers/base64.o 00:39:16.164 LINK lsvmd 00:39:16.164 CC examples/vmd/led/led.o 00:39:16.164 CC examples/fsdev/hello_world/hello_fsdev.o 00:39:16.164 LINK spdk_nvme_perf 00:39:16.421 CXX test/cpp_headers/bdev.o 00:39:16.421 LINK idxd_perf 00:39:16.421 CC test/env/vtophys/vtophys.o 00:39:16.421 LINK led 00:39:16.710 CXX test/cpp_headers/bdev_module.o 00:39:16.710 LINK mem_callbacks 00:39:16.710 CXX test/cpp_headers/bdev_zone.o 00:39:16.710 LINK vtophys 00:39:16.710 CXX test/cpp_headers/bit_array.o 00:39:16.710 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:39:16.710 LINK hello_fsdev 00:39:16.710 LINK iscsi_fuzz 00:39:16.710 CXX test/cpp_headers/bit_pool.o 00:39:16.968 LINK spdk_top 00:39:16.968 CXX test/cpp_headers/blob_bdev.o 00:39:16.968 LINK env_dpdk_post_init 00:39:16.968 CC app/vhost/vhost.o 00:39:16.968 CC app/spdk_dd/spdk_dd.o 00:39:17.225 CC app/fio/nvme/fio_plugin.o 00:39:17.225 CC test/env/memory/memory_ut.o 00:39:17.225 CC app/fio/bdev/fio_plugin.o 00:39:17.225 CC test/env/pci/pci_ut.o 00:39:17.225 CXX 
test/cpp_headers/blobfs_bdev.o 00:39:17.225 LINK spdk_nvme_identify 00:39:17.225 LINK vhost 00:39:17.566 CXX test/cpp_headers/blobfs.o 00:39:17.566 CC examples/accel/perf/accel_perf.o 00:39:17.566 CC examples/blob/hello_world/hello_blob.o 00:39:17.566 LINK spdk_dd 00:39:17.566 CXX test/cpp_headers/blob.o 00:39:17.566 CXX test/cpp_headers/conf.o 00:39:17.960 LINK pci_ut 00:39:17.960 LINK hello_blob 00:39:17.960 CXX test/cpp_headers/config.o 00:39:17.960 LINK spdk_bdev 00:39:17.960 LINK spdk_nvme 00:39:17.960 CXX test/cpp_headers/cpuset.o 00:39:17.960 CC examples/nvme/hello_world/hello_world.o 00:39:17.960 CC test/event/event_perf/event_perf.o 00:39:17.960 CC test/nvme/aer/aer.o 00:39:17.960 CXX test/cpp_headers/crc16.o 00:39:17.960 CC test/nvme/reset/reset.o 00:39:17.960 CC test/nvme/sgl/sgl.o 00:39:18.219 LINK accel_perf 00:39:18.219 CC examples/nvme/reconnect/reconnect.o 00:39:18.219 LINK hello_world 00:39:18.219 CC examples/blob/cli/blobcli.o 00:39:18.219 LINK event_perf 00:39:18.219 CXX test/cpp_headers/crc32.o 00:39:18.476 CC test/event/reactor/reactor.o 00:39:18.476 LINK aer 00:39:18.476 LINK reset 00:39:18.476 LINK sgl 00:39:18.476 CXX test/cpp_headers/crc64.o 00:39:18.476 CC test/event/reactor_perf/reactor_perf.o 00:39:18.476 CC test/event/app_repeat/app_repeat.o 00:39:18.476 LINK reactor 00:39:18.476 LINK reconnect 00:39:18.476 LINK memory_ut 00:39:18.476 LINK reactor_perf 00:39:18.476 CXX test/cpp_headers/dif.o 00:39:18.734 LINK app_repeat 00:39:18.734 CC test/nvme/e2edp/nvme_dp.o 00:39:18.734 CC test/nvme/overhead/overhead.o 00:39:18.734 CC test/rpc_client/rpc_client_test.o 00:39:18.734 CXX test/cpp_headers/dma.o 00:39:18.734 CXX test/cpp_headers/endian.o 00:39:18.734 LINK blobcli 00:39:18.734 CC examples/nvme/nvme_manage/nvme_manage.o 00:39:18.992 LINK rpc_client_test 00:39:18.992 CC examples/nvme/arbitration/arbitration.o 00:39:18.992 CC test/event/scheduler/scheduler.o 00:39:18.992 CXX test/cpp_headers/env_dpdk.o 00:39:18.992 CXX test/cpp_headers/env.o 00:39:18.992 LINK nvme_dp 00:39:18.992 CC test/nvme/err_injection/err_injection.o 00:39:18.992 LINK overhead 00:39:18.992 CC examples/bdev/hello_world/hello_bdev.o 00:39:18.992 CXX test/cpp_headers/event.o 00:39:19.252 CXX test/cpp_headers/fd_group.o 00:39:19.252 LINK err_injection 00:39:19.252 LINK scheduler 00:39:19.252 CXX test/cpp_headers/fd.o 00:39:19.252 CC test/nvme/startup/startup.o 00:39:19.252 CC examples/bdev/bdevperf/bdevperf.o 00:39:19.252 LINK arbitration 00:39:19.252 CC test/nvme/reserve/reserve.o 00:39:19.252 LINK hello_bdev 00:39:19.509 CXX test/cpp_headers/file.o 00:39:19.509 LINK nvme_manage 00:39:19.509 CC examples/nvme/hotplug/hotplug.o 00:39:19.509 LINK startup 00:39:19.509 CC examples/nvme/cmb_copy/cmb_copy.o 00:39:19.509 CC examples/nvme/abort/abort.o 00:39:19.509 LINK reserve 00:39:19.509 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:39:19.767 CC test/nvme/simple_copy/simple_copy.o 00:39:19.767 CXX test/cpp_headers/fsdev.o 00:39:19.767 LINK cmb_copy 00:39:19.767 LINK hotplug 00:39:19.767 CC test/nvme/connect_stress/connect_stress.o 00:39:19.767 LINK pmr_persistence 00:39:19.767 CXX test/cpp_headers/fsdev_module.o 00:39:19.767 CC test/accel/dif/dif.o 00:39:19.767 CXX test/cpp_headers/ftl.o 00:39:20.026 CC test/blobfs/mkfs/mkfs.o 00:39:20.026 LINK simple_copy 00:39:20.026 LINK abort 00:39:20.026 LINK connect_stress 00:39:20.026 CC test/nvme/boot_partition/boot_partition.o 00:39:20.026 CXX test/cpp_headers/fuse_dispatcher.o 00:39:20.026 LINK mkfs 00:39:20.284 CC 
test/nvme/compliance/nvme_compliance.o 00:39:20.284 CXX test/cpp_headers/gpt_spec.o 00:39:20.284 CC test/lvol/esnap/esnap.o 00:39:20.284 CC test/nvme/fused_ordering/fused_ordering.o 00:39:20.284 CXX test/cpp_headers/hexlify.o 00:39:20.284 LINK boot_partition 00:39:20.284 CC test/nvme/doorbell_aers/doorbell_aers.o 00:39:20.284 LINK bdevperf 00:39:20.542 CC test/nvme/fdp/fdp.o 00:39:20.542 CC test/nvme/cuse/cuse.o 00:39:20.542 CXX test/cpp_headers/histogram_data.o 00:39:20.542 LINK fused_ordering 00:39:20.542 CXX test/cpp_headers/idxd.o 00:39:20.542 LINK doorbell_aers 00:39:20.542 LINK nvme_compliance 00:39:20.799 CXX test/cpp_headers/idxd_spec.o 00:39:20.799 CXX test/cpp_headers/init.o 00:39:20.799 CXX test/cpp_headers/ioat.o 00:39:20.799 LINK dif 00:39:20.799 CXX test/cpp_headers/ioat_spec.o 00:39:20.799 CXX test/cpp_headers/iscsi_spec.o 00:39:20.799 CC examples/nvmf/nvmf/nvmf.o 00:39:20.799 CXX test/cpp_headers/json.o 00:39:20.799 CXX test/cpp_headers/jsonrpc.o 00:39:20.799 LINK fdp 00:39:21.058 CXX test/cpp_headers/keyring.o 00:39:21.058 CXX test/cpp_headers/keyring_module.o 00:39:21.058 CXX test/cpp_headers/likely.o 00:39:21.058 CXX test/cpp_headers/log.o 00:39:21.058 CXX test/cpp_headers/lvol.o 00:39:21.058 CXX test/cpp_headers/md5.o 00:39:21.058 CXX test/cpp_headers/memory.o 00:39:21.058 CXX test/cpp_headers/mmio.o 00:39:21.058 CXX test/cpp_headers/nbd.o 00:39:21.058 CC test/bdev/bdevio/bdevio.o 00:39:21.316 CXX test/cpp_headers/net.o 00:39:21.316 LINK nvmf 00:39:21.316 CXX test/cpp_headers/notify.o 00:39:21.316 CXX test/cpp_headers/nvme.o 00:39:21.316 CXX test/cpp_headers/nvme_intel.o 00:39:21.316 CXX test/cpp_headers/nvme_ocssd.o 00:39:21.316 CXX test/cpp_headers/nvme_ocssd_spec.o 00:39:21.316 CXX test/cpp_headers/nvme_spec.o 00:39:21.316 CXX test/cpp_headers/nvme_zns.o 00:39:21.574 CXX test/cpp_headers/nvmf_cmd.o 00:39:21.574 CXX test/cpp_headers/nvmf_fc_spec.o 00:39:21.574 CXX test/cpp_headers/nvmf.o 00:39:21.574 CXX test/cpp_headers/nvmf_spec.o 00:39:21.574 CXX test/cpp_headers/nvmf_transport.o 00:39:21.574 CXX test/cpp_headers/opal.o 00:39:21.574 LINK bdevio 00:39:21.574 CXX test/cpp_headers/opal_spec.o 00:39:21.574 CXX test/cpp_headers/pci_ids.o 00:39:21.574 CXX test/cpp_headers/pipe.o 00:39:21.574 CXX test/cpp_headers/queue.o 00:39:21.832 CXX test/cpp_headers/reduce.o 00:39:21.832 CXX test/cpp_headers/rpc.o 00:39:21.832 CXX test/cpp_headers/scheduler.o 00:39:21.832 CXX test/cpp_headers/scsi.o 00:39:21.832 CXX test/cpp_headers/scsi_spec.o 00:39:21.832 CXX test/cpp_headers/sock.o 00:39:21.832 CXX test/cpp_headers/stdinc.o 00:39:21.832 CXX test/cpp_headers/string.o 00:39:21.832 CXX test/cpp_headers/thread.o 00:39:21.832 CXX test/cpp_headers/trace.o 00:39:22.091 CXX test/cpp_headers/trace_parser.o 00:39:22.091 CXX test/cpp_headers/tree.o 00:39:22.091 CXX test/cpp_headers/ublk.o 00:39:22.091 CXX test/cpp_headers/util.o 00:39:22.091 CXX test/cpp_headers/uuid.o 00:39:22.091 CXX test/cpp_headers/version.o 00:39:22.091 CXX test/cpp_headers/vfio_user_pci.o 00:39:22.091 CXX test/cpp_headers/vfio_user_spec.o 00:39:22.091 CXX test/cpp_headers/vhost.o 00:39:22.091 CXX test/cpp_headers/vmd.o 00:39:22.091 LINK cuse 00:39:22.091 CXX test/cpp_headers/xor.o 00:39:22.091 CXX test/cpp_headers/zipf.o 00:39:27.356 LINK esnap 00:39:27.923 00:39:27.923 real 2m0.750s 00:39:27.923 user 11m30.468s 00:39:27.923 sys 2m4.293s 00:39:27.923 09:50:34 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:39:27.923 09:50:34 make -- common/autotest_common.sh@10 -- $ set +x 00:39:27.923 
************************************ 00:39:27.923 END TEST make 00:39:27.923 ************************************ 00:39:27.923 09:50:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:39:27.924 09:50:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:39:27.924 09:50:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:39:27.924 09:50:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:27.924 09:50:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:39:27.924 09:50:34 -- pm/common@44 -- $ pid=5450 00:39:27.924 09:50:34 -- pm/common@50 -- $ kill -TERM 5450 00:39:27.924 09:50:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:27.924 09:50:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:39:27.924 09:50:34 -- pm/common@44 -- $ pid=5452 00:39:27.924 09:50:34 -- pm/common@50 -- $ kill -TERM 5452 00:39:27.924 09:50:34 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:39:27.924 09:50:34 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:39:27.924 09:50:34 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:27.924 09:50:34 -- common/autotest_common.sh@1711 -- # lcov --version 00:39:27.924 09:50:34 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:27.924 09:50:34 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:27.924 09:50:34 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:27.924 09:50:34 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:27.924 09:50:34 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:27.924 09:50:34 -- scripts/common.sh@336 -- # IFS=.-: 00:39:27.924 09:50:34 -- scripts/common.sh@336 -- # read -ra ver1 00:39:27.924 09:50:34 -- scripts/common.sh@337 -- # IFS=.-: 00:39:27.924 09:50:34 -- scripts/common.sh@337 -- # read -ra ver2 00:39:27.924 09:50:34 -- scripts/common.sh@338 -- # local 'op=<' 00:39:27.924 09:50:34 -- scripts/common.sh@340 -- # ver1_l=2 00:39:27.924 09:50:34 -- scripts/common.sh@341 -- # ver2_l=1 00:39:27.924 09:50:34 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:27.924 09:50:34 -- scripts/common.sh@344 -- # case "$op" in 00:39:27.924 09:50:34 -- scripts/common.sh@345 -- # : 1 00:39:27.924 09:50:34 -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:27.924 09:50:34 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:27.924 09:50:34 -- scripts/common.sh@365 -- # decimal 1 00:39:28.183 09:50:34 -- scripts/common.sh@353 -- # local d=1 00:39:28.183 09:50:34 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:28.183 09:50:34 -- scripts/common.sh@355 -- # echo 1 00:39:28.183 09:50:34 -- scripts/common.sh@365 -- # ver1[v]=1 00:39:28.183 09:50:34 -- scripts/common.sh@366 -- # decimal 2 00:39:28.183 09:50:34 -- scripts/common.sh@353 -- # local d=2 00:39:28.183 09:50:34 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:28.183 09:50:34 -- scripts/common.sh@355 -- # echo 2 00:39:28.183 09:50:34 -- scripts/common.sh@366 -- # ver2[v]=2 00:39:28.183 09:50:34 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:28.183 09:50:34 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:28.183 09:50:34 -- scripts/common.sh@368 -- # return 0 00:39:28.183 09:50:34 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:28.183 09:50:34 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:28.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.183 --rc genhtml_branch_coverage=1 00:39:28.183 --rc genhtml_function_coverage=1 00:39:28.183 --rc genhtml_legend=1 00:39:28.183 --rc geninfo_all_blocks=1 00:39:28.183 --rc geninfo_unexecuted_blocks=1 00:39:28.183 00:39:28.183 ' 00:39:28.183 09:50:34 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:28.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.183 --rc genhtml_branch_coverage=1 00:39:28.183 --rc genhtml_function_coverage=1 00:39:28.183 --rc genhtml_legend=1 00:39:28.183 --rc geninfo_all_blocks=1 00:39:28.183 --rc geninfo_unexecuted_blocks=1 00:39:28.183 00:39:28.183 ' 00:39:28.183 09:50:34 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:28.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.183 --rc genhtml_branch_coverage=1 00:39:28.183 --rc genhtml_function_coverage=1 00:39:28.183 --rc genhtml_legend=1 00:39:28.183 --rc geninfo_all_blocks=1 00:39:28.183 --rc geninfo_unexecuted_blocks=1 00:39:28.183 00:39:28.183 ' 00:39:28.183 09:50:34 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:28.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.183 --rc genhtml_branch_coverage=1 00:39:28.183 --rc genhtml_function_coverage=1 00:39:28.183 --rc genhtml_legend=1 00:39:28.183 --rc geninfo_all_blocks=1 00:39:28.183 --rc geninfo_unexecuted_blocks=1 00:39:28.183 00:39:28.183 ' 00:39:28.183 09:50:34 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:28.183 09:50:34 -- nvmf/common.sh@7 -- # uname -s 00:39:28.183 09:50:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:28.183 09:50:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:28.183 09:50:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:28.183 09:50:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:28.183 09:50:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:28.183 09:50:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:28.183 09:50:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:28.183 09:50:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:28.183 09:50:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:28.183 09:50:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:28.183 09:50:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1a580454-7778-4a91-9746-e8f5310fed33 00:39:28.183 
09:50:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=1a580454-7778-4a91-9746-e8f5310fed33 00:39:28.183 09:50:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:28.183 09:50:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:28.183 09:50:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:39:28.183 09:50:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:28.183 09:50:35 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:28.183 09:50:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:39:28.183 09:50:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:28.183 09:50:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:28.183 09:50:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:28.183 09:50:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.183 09:50:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.183 09:50:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.183 09:50:35 -- paths/export.sh@5 -- # export PATH 00:39:28.183 09:50:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.183 09:50:35 -- nvmf/common.sh@51 -- # : 0 00:39:28.183 09:50:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:28.183 09:50:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:28.183 09:50:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:28.183 09:50:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:28.183 09:50:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:28.183 09:50:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:28.183 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:28.183 09:50:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:28.183 09:50:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:28.183 09:50:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:28.183 09:50:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:39:28.183 09:50:35 -- spdk/autotest.sh@32 -- # uname -s 00:39:28.183 09:50:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:39:28.183 09:50:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:39:28.183 09:50:35 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:39:28.183 09:50:35 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:39:28.183 09:50:35 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:39:28.183 09:50:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:39:28.183 09:50:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:39:28.183 09:50:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:39:28.183 09:50:35 -- spdk/autotest.sh@48 -- # udevadm_pid=55220 00:39:28.183 09:50:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:39:28.183 09:50:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:39:28.183 09:50:35 -- pm/common@17 -- # local monitor 00:39:28.183 09:50:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:39:28.183 09:50:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:39:28.183 09:50:35 -- pm/common@25 -- # sleep 1 00:39:28.183 09:50:35 -- pm/common@21 -- # date +%s 00:39:28.183 09:50:35 -- pm/common@21 -- # date +%s 00:39:28.183 09:50:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733737835 00:39:28.183 09:50:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733737835 00:39:28.183 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733737835_collect-vmstat.pm.log 00:39:28.183 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733737835_collect-cpu-load.pm.log 00:39:29.118 09:50:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:39:29.118 09:50:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:39:29.118 09:50:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:29.118 09:50:36 -- common/autotest_common.sh@10 -- # set +x 00:39:29.118 09:50:36 -- spdk/autotest.sh@59 -- # create_test_list 00:39:29.118 09:50:36 -- common/autotest_common.sh@752 -- # xtrace_disable 00:39:29.118 09:50:36 -- common/autotest_common.sh@10 -- # set +x 00:39:29.118 09:50:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:39:29.118 09:50:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:39:29.118 09:50:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:39:29.118 09:50:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:39:29.118 09:50:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:39:29.118 09:50:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:39:29.118 09:50:36 -- common/autotest_common.sh@1457 -- # uname 00:39:29.118 09:50:36 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:39:29.118 09:50:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:39:29.118 09:50:36 -- common/autotest_common.sh@1477 -- # uname 00:39:29.118 09:50:36 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:39:29.118 09:50:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:39:29.118 09:50:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:39:29.377 lcov: LCOV version 1.15 00:39:29.377 09:50:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:39:47.456 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:39:47.456 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:40:05.606 09:51:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:40:05.607 09:51:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:05.607 09:51:09 -- common/autotest_common.sh@10 -- # set +x 00:40:05.607 09:51:09 -- spdk/autotest.sh@78 -- # rm -f 00:40:05.607 09:51:09 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:40:05.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:05.607 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:40:05.607 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:40:05.607 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:40:05.607 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:40:05.607 09:51:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:40:05.607 09:51:10 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:40:05.607 09:51:10 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:40:05.607 09:51:10 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:40:05.607 09:51:10 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:40:05.607 09:51:10 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:40:05.607 09:51:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:40:05.607 09:51:10 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:40:05.607 09:51:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:40:05.607 09:51:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:40:05.607 09:51:10 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:40:05.607 09:51:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:05.607 09:51:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:05.607 09:51:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:40:05.607 09:51:10 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:40:05.607 09:51:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:40:05.607 09:51:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:40:05.607 09:51:10 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:40:05.607 09:51:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:40:05.607 09:51:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:05.607 09:51:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:40:05.607 09:51:10 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:40:05.607 09:51:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:40:05.607 09:51:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:40:05.607 09:51:10 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:40:05.607 09:51:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:40:05.607 09:51:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:05.607 09:51:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:40:05.607 09:51:10 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:40:05.607 09:51:10 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:40:05.607 09:51:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:40:05.607 09:51:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:05.607 09:51:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:40:05.607 09:51:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:40:05.607 09:51:10 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:40:05.607 09:51:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:40:05.607 09:51:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:05.607 09:51:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:40:05.607 09:51:10 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:40:05.607 09:51:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:40:05.607 09:51:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:40:05.607 09:51:10 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:40:05.607 09:51:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:40:05.607 09:51:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:05.607 09:51:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:40:05.607 09:51:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:40:05.607 09:51:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:40:05.607 09:51:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:40:05.607 09:51:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:40:05.607 09:51:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:40:05.607 No valid GPT data, bailing 00:40:05.607 09:51:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:05.607 09:51:10 -- scripts/common.sh@394 -- # pt= 00:40:05.607 09:51:10 -- scripts/common.sh@395 -- # return 1 00:40:05.607 09:51:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:40:05.607 1+0 records in 00:40:05.607 1+0 records out 00:40:05.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157647 s, 66.5 MB/s 00:40:05.607 09:51:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:40:05.607 09:51:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:40:05.607 09:51:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:40:05.607 09:51:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:40:05.607 09:51:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:40:05.607 No valid GPT data, bailing 00:40:05.607 09:51:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:40:05.607 09:51:10 -- scripts/common.sh@394 -- # pt= 00:40:05.607 09:51:10 -- scripts/common.sh@395 -- # return 1 00:40:05.607 09:51:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:40:05.607 1+0 records in 00:40:05.607 1+0 records out 00:40:05.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499689 s, 210 MB/s 00:40:05.607 09:51:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:40:05.607 09:51:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:40:05.607 09:51:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:40:05.607 09:51:10 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:40:05.607 09:51:10 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:40:05.607 No valid GPT data, bailing 00:40:05.607 09:51:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:40:05.607 09:51:10 -- scripts/common.sh@394 -- # pt= 00:40:05.607 09:51:10 -- scripts/common.sh@395 -- # return 1 00:40:05.607 09:51:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:40:05.607 1+0 records in 00:40:05.607 1+0 records out 00:40:05.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458026 s, 229 MB/s 00:40:05.607 09:51:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:40:05.607 09:51:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:40:05.607 09:51:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:40:05.607 09:51:10 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:40:05.607 09:51:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:40:05.607 No valid GPT data, bailing 00:40:05.607 09:51:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:40:05.607 09:51:11 -- scripts/common.sh@394 -- # pt= 00:40:05.607 09:51:11 -- scripts/common.sh@395 -- # return 1 00:40:05.607 09:51:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:40:05.607 1+0 records in 00:40:05.607 1+0 records out 00:40:05.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0052074 s, 201 MB/s 00:40:05.607 09:51:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:40:05.607 09:51:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:40:05.607 09:51:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:40:05.607 09:51:11 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:40:05.607 09:51:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:40:05.607 No valid GPT data, bailing 00:40:05.607 09:51:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:40:05.607 09:51:11 -- scripts/common.sh@394 -- # pt= 00:40:05.607 09:51:11 -- scripts/common.sh@395 -- # return 1 00:40:05.607 09:51:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:40:05.607 1+0 records in 00:40:05.607 1+0 records out 00:40:05.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463467 s, 226 MB/s 00:40:05.607 09:51:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:40:05.607 09:51:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:40:05.607 09:51:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:40:05.607 09:51:11 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:40:05.607 09:51:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:40:05.607 No valid GPT data, bailing 00:40:05.607 09:51:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:40:05.607 09:51:11 -- scripts/common.sh@394 -- # pt= 00:40:05.607 09:51:11 -- scripts/common.sh@395 -- # return 1 00:40:05.607 09:51:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:40:05.607 1+0 records in 00:40:05.607 1+0 records out 00:40:05.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494369 s, 212 MB/s 00:40:05.607 09:51:11 -- spdk/autotest.sh@105 -- # sync 00:40:05.607 09:51:11 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:40:05.607 09:51:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:40:05.607 09:51:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:40:06.543 
09:51:13 -- spdk/autotest.sh@111 -- # uname -s 00:40:06.543 09:51:13 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:40:06.543 09:51:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:40:06.543 09:51:13 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:40:07.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:07.369 Hugepages 00:40:07.369 node hugesize free / total 00:40:07.369 node0 1048576kB 0 / 0 00:40:07.369 node0 2048kB 0 / 0 00:40:07.369 00:40:07.369 Type BDF Vendor Device NUMA Driver Device Block devices 00:40:07.369 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:40:07.629 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:40:07.629 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:40:07.629 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:40:07.887 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:40:07.887 09:51:14 -- spdk/autotest.sh@117 -- # uname -s 00:40:07.887 09:51:14 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:40:07.887 09:51:14 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:40:07.887 09:51:14 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:08.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:09.021 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:40:09.021 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:40:09.021 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:40:09.021 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:40:09.021 09:51:15 -- common/autotest_common.sh@1517 -- # sleep 1 00:40:09.955 09:51:16 -- common/autotest_common.sh@1518 -- # bdfs=() 00:40:09.955 09:51:16 -- common/autotest_common.sh@1518 -- # local bdfs 00:40:09.955 09:51:16 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:40:09.955 09:51:16 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:40:09.955 09:51:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:09.955 09:51:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:40:09.955 09:51:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:09.955 09:51:16 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:09.955 09:51:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:10.215 09:51:17 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:40:10.215 09:51:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:40:10.215 09:51:17 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:40:10.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:10.732 Waiting for block devices as requested 00:40:10.732 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:40:10.732 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:40:10.732 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:40:10.991 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:40:16.438 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:40:16.438 09:51:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:40:16.438 09:51:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
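The xtrace entries around this point show how autotest resolves a PCI address to its NVMe controller node: every /sys/class/nvme/nvme* symlink is resolved with readlink -f, filtered for the target BDF, and reduced to its basename. A condensed sketch of that lookup (the function name is kept from the log; packaging it as a standalone function like this is an assumption, not a copy of autotest_common.sh):

  get_nvme_ctrlr_from_bdf() {
      local bdf=$1 path
      # resolve all controller symlinks, keep the one living under this BDF
      path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
      basename "$path"   # e.g. nvme1 for 0000:00:10.0 in this run
  }

The trace that follows then feeds the result to nvme id-ctrl, grepping out the oacs word to decide whether namespace management is supported and the unvmcap word to confirm there is no unallocated capacity to revert.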
00:40:16.438 09:51:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:40:16.438 09:51:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:40:16.438 09:51:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:40:16.438 09:51:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:40:16.438 09:51:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:40:16.438 09:51:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:40:16.438 09:51:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:40:16.438 09:51:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:40:16.438 09:51:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:40:16.438 09:51:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:40:16.438 09:51:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:40:16.438 09:51:22 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:40:16.438 09:51:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:40:16.438 09:51:22 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:40:16.438 09:51:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:40:16.438 09:51:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:40:16.438 09:51:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:40:16.438 09:51:22 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:40:16.438 09:51:22 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:40:16.438 09:51:22 -- common/autotest_common.sh@1543 -- # continue 00:40:16.438 09:51:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:40:16.438 09:51:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:40:16.438 09:51:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:40:16.438 09:51:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:40:16.438 09:51:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:40:16.438 09:51:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:40:16.438 09:51:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:40:16.438 09:51:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:40:16.438 09:51:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:40:16.438 09:51:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:40:16.438 09:51:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:40:16.438 09:51:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:40:16.438 09:51:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:40:16.438 09:51:22 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:40:16.438 09:51:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:40:16.438 09:51:22 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:40:16.438 09:51:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:40:16.438 09:51:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:40:16.438 09:51:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:40:16.438 09:51:23 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:40:16.438 09:51:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:40:16.438 09:51:23 -- common/autotest_common.sh@1543 -- # continue 00:40:16.438 09:51:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:40:16.438 09:51:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:40:16.438 09:51:23 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:40:16.438 09:51:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:40:16.438 09:51:23 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:40:16.438 09:51:23 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:40:16.438 09:51:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:40:16.438 09:51:23 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:40:16.438 09:51:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:40:16.438 09:51:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:40:16.438 09:51:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:40:16.438 09:51:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:40:16.438 09:51:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:40:16.438 09:51:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:40:16.438 09:51:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:40:16.438 09:51:23 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:40:16.438 09:51:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:40:16.438 09:51:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:40:16.438 09:51:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:40:16.438 09:51:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:40:16.438 09:51:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:40:16.438 09:51:23 -- common/autotest_common.sh@1543 -- # continue 00:40:16.438 09:51:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:40:16.438 09:51:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:40:16.438 09:51:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:40:16.438 09:51:23 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:40:16.438 09:51:23 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:40:16.438 09:51:23 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:40:16.438 09:51:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:40:16.438 09:51:23 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:40:16.438 09:51:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:40:16.438 09:51:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:40:16.438 09:51:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:40:16.438 09:51:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:40:16.438 09:51:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:40:16.438 09:51:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:40:16.438 09:51:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:40:16.438 09:51:23 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:40:16.438 09:51:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:40:16.438 09:51:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:40:16.438 09:51:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:40:16.438 09:51:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:40:16.438 09:51:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:40:16.438 09:51:23 -- common/autotest_common.sh@1543 -- # continue 00:40:16.438 09:51:23 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:40:16.438 09:51:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:16.438 09:51:23 -- common/autotest_common.sh@10 -- # set +x 00:40:16.438 09:51:23 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:40:16.438 09:51:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:16.438 09:51:23 -- common/autotest_common.sh@10 -- # set +x 00:40:16.438 09:51:23 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:16.696 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:17.262 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:40:17.262 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:40:17.262 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:40:17.521 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:40:17.521 09:51:24 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:40:17.521 09:51:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:17.521 09:51:24 -- common/autotest_common.sh@10 -- # set +x 00:40:17.521 09:51:24 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:40:17.521 09:51:24 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:40:17.521 09:51:24 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:40:17.521 09:51:24 -- common/autotest_common.sh@1563 -- # bdfs=() 00:40:17.521 09:51:24 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:40:17.521 09:51:24 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:40:17.521 09:51:24 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:40:17.521 09:51:24 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:40:17.521 09:51:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:17.521 09:51:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:40:17.521 09:51:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:17.521 09:51:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:17.521 09:51:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:17.521 09:51:24 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:40:17.521 09:51:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:40:17.521 09:51:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:40:17.521 09:51:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:40:17.521 09:51:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:40:17.521 09:51:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:40:17.521 09:51:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:40:17.521 09:51:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:40:17.521 09:51:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:40:17.521 
09:51:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:40:17.521 09:51:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:40:17.521 09:51:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:40:17.521 09:51:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:40:17.521 09:51:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:40:17.521 09:51:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:40:17.521 09:51:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:40:17.521 09:51:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:40:17.521 09:51:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:40:17.521 09:51:24 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:40:17.521 09:51:24 -- common/autotest_common.sh@1572 -- # return 0 00:40:17.521 09:51:24 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:40:17.521 09:51:24 -- common/autotest_common.sh@1580 -- # return 0 00:40:17.521 09:51:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:40:17.521 09:51:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:40:17.521 09:51:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:40:17.521 09:51:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:40:17.521 09:51:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:40:17.521 09:51:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:17.521 09:51:24 -- common/autotest_common.sh@10 -- # set +x 00:40:17.521 09:51:24 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:40:17.521 09:51:24 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:40:17.521 09:51:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:17.521 09:51:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:17.521 09:51:24 -- common/autotest_common.sh@10 -- # set +x 00:40:17.521 ************************************ 00:40:17.521 START TEST env 00:40:17.521 ************************************ 00:40:17.521 09:51:24 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:40:17.780 * Looking for test storage... 
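run_test, visible in the xtrace above (run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh), is the helper that produces the START TEST / END TEST banners threaded through this whole log. A rough sketch of its observable behavior, reconstructed from the output rather than copied from autotest_common.sh:

  run_test() {
      local name=$1; shift
      printf '************************************\nSTART TEST %s\n************************************\n' "$name"
      "$@"                 # run the suite (script or binary) with its arguments
      local rc=$?
      printf '************************************\nEND TEST %s\n************************************\n' "$name"
      return $rc
  }

The real helper appears to also time each suite (note the real/user/sys block before END TEST make earlier in this log); that part is omitted from the sketch.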
00:40:17.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:40:17.780 09:51:24 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:17.780 09:51:24 env -- common/autotest_common.sh@1711 -- # lcov --version 00:40:17.780 09:51:24 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:17.780 09:51:24 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:17.780 09:51:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:17.780 09:51:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:17.780 09:51:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:17.780 09:51:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:40:17.780 09:51:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:40:17.780 09:51:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:40:17.780 09:51:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:40:17.780 09:51:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:40:17.780 09:51:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:40:17.780 09:51:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:40:17.780 09:51:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:17.780 09:51:24 env -- scripts/common.sh@344 -- # case "$op" in 00:40:17.780 09:51:24 env -- scripts/common.sh@345 -- # : 1 00:40:17.780 09:51:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:17.780 09:51:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:17.780 09:51:24 env -- scripts/common.sh@365 -- # decimal 1 00:40:17.780 09:51:24 env -- scripts/common.sh@353 -- # local d=1 00:40:17.780 09:51:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:17.780 09:51:24 env -- scripts/common.sh@355 -- # echo 1 00:40:17.780 09:51:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:40:17.780 09:51:24 env -- scripts/common.sh@366 -- # decimal 2 00:40:17.780 09:51:24 env -- scripts/common.sh@353 -- # local d=2 00:40:17.780 09:51:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:17.780 09:51:24 env -- scripts/common.sh@355 -- # echo 2 00:40:17.780 09:51:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:40:17.780 09:51:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:17.780 09:51:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:17.780 09:51:24 env -- scripts/common.sh@368 -- # return 0 00:40:17.780 09:51:24 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:17.780 09:51:24 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:17.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.780 --rc genhtml_branch_coverage=1 00:40:17.780 --rc genhtml_function_coverage=1 00:40:17.780 --rc genhtml_legend=1 00:40:17.780 --rc geninfo_all_blocks=1 00:40:17.780 --rc geninfo_unexecuted_blocks=1 00:40:17.780 00:40:17.780 ' 00:40:17.780 09:51:24 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:17.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.780 --rc genhtml_branch_coverage=1 00:40:17.780 --rc genhtml_function_coverage=1 00:40:17.780 --rc genhtml_legend=1 00:40:17.780 --rc geninfo_all_blocks=1 00:40:17.780 --rc geninfo_unexecuted_blocks=1 00:40:17.780 00:40:17.780 ' 00:40:17.780 09:51:24 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:17.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.780 --rc genhtml_branch_coverage=1 00:40:17.780 --rc genhtml_function_coverage=1 00:40:17.780 --rc 
genhtml_legend=1 00:40:17.780 --rc geninfo_all_blocks=1 00:40:17.780 --rc geninfo_unexecuted_blocks=1 00:40:17.780 00:40:17.780 ' 00:40:17.780 09:51:24 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:17.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.780 --rc genhtml_branch_coverage=1 00:40:17.780 --rc genhtml_function_coverage=1 00:40:17.780 --rc genhtml_legend=1 00:40:17.780 --rc geninfo_all_blocks=1 00:40:17.780 --rc geninfo_unexecuted_blocks=1 00:40:17.780 00:40:17.780 ' 00:40:17.780 09:51:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:40:17.780 09:51:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:17.780 09:51:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:17.780 09:51:24 env -- common/autotest_common.sh@10 -- # set +x 00:40:17.780 ************************************ 00:40:17.780 START TEST env_memory 00:40:17.780 ************************************ 00:40:17.780 09:51:24 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:40:17.780 00:40:17.780 00:40:17.780 CUnit - A unit testing framework for C - Version 2.1-3 00:40:17.780 http://cunit.sourceforge.net/ 00:40:17.780 00:40:17.780 00:40:17.780 Suite: memory 00:40:18.039 Test: alloc and free memory map ...[2024-12-09 09:51:24.855561] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:40:18.039 passed 00:40:18.039 Test: mem map translation ...[2024-12-09 09:51:24.924017] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:40:18.039 [2024-12-09 09:51:24.924122] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:40:18.039 [2024-12-09 09:51:24.924234] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:40:18.039 [2024-12-09 09:51:24.924327] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:40:18.039 passed 00:40:18.039 Test: mem map registration ...[2024-12-09 09:51:25.031059] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:40:18.039 [2024-12-09 09:51:25.031226] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:40:18.039 passed 00:40:18.298 Test: mem map adjacent registrations ...passed 00:40:18.298 00:40:18.298 Run Summary: Type Total Ran Passed Failed Inactive 00:40:18.298 suites 1 1 n/a 0 0 00:40:18.298 tests 4 4 4 0 0 00:40:18.298 asserts 152 152 152 0 n/a 00:40:18.298 00:40:18.298 Elapsed time = 0.363 seconds 00:40:18.298 00:40:18.298 real 0m0.403s 00:40:18.298 user 0m0.365s 00:40:18.298 sys 0m0.029s 00:40:18.298 09:51:25 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:18.298 ************************************ 00:40:18.298 END TEST env_memory 00:40:18.298 ************************************ 00:40:18.298 09:51:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:40:18.298 09:51:25 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:40:18.298 09:51:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:18.298 09:51:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:18.298 09:51:25 env -- common/autotest_common.sh@10 -- # set +x 00:40:18.298 ************************************ 00:40:18.298 START TEST env_vtophys 00:40:18.298 ************************************ 00:40:18.298 09:51:25 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:40:18.298 EAL: lib.eal log level changed from notice to debug 00:40:18.298 EAL: Detected lcore 0 as core 0 on socket 0 00:40:18.298 EAL: Detected lcore 1 as core 0 on socket 0 00:40:18.298 EAL: Detected lcore 2 as core 0 on socket 0 00:40:18.298 EAL: Detected lcore 3 as core 0 on socket 0 00:40:18.298 EAL: Detected lcore 4 as core 0 on socket 0 00:40:18.298 EAL: Detected lcore 5 as core 0 on socket 0 00:40:18.298 EAL: Detected lcore 6 as core 0 on socket 0 00:40:18.298 EAL: Detected lcore 7 as core 0 on socket 0 00:40:18.298 EAL: Detected lcore 8 as core 0 on socket 0 00:40:18.298 EAL: Detected lcore 9 as core 0 on socket 0 00:40:18.298 EAL: Maximum logical cores by configuration: 128 00:40:18.298 EAL: Detected CPU lcores: 10 00:40:18.298 EAL: Detected NUMA nodes: 1 00:40:18.298 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:40:18.298 EAL: Detected shared linkage of DPDK 00:40:18.298 EAL: No shared files mode enabled, IPC will be disabled 00:40:18.298 EAL: Selected IOVA mode 'PA' 00:40:18.298 EAL: Probing VFIO support... 00:40:18.298 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:40:18.298 EAL: VFIO modules not loaded, skipping VFIO support... 00:40:18.298 EAL: Ask a virtual area of 0x2e000 bytes 00:40:18.298 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:40:18.298 EAL: Setting up physically contiguous memory... 
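An aside while EAL lays out its memory: the *ERROR* lines that env_memory logged above (vaddr=1234, len=1234) are deliberate — spdk_mem_map tracks translations at 2 MiB granularity and rejects unaligned parameters. A minimal sketch of the API that suite drives, assuming the declarations in spdk/env.h; the map values here are illustrative:

```c
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

/* Trivial ops: accept every notification.  A real consumer would
 * program an IOMMU or a device page table in this callback. */
static int
sketch_notify(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action,
              void *vaddr, size_t size)
{
    return 0;
}

static const struct spdk_mem_map_ops sketch_ops = {
    .notify_cb = sketch_notify,
    .are_contiguous = NULL,
};

void
mem_map_sketch(void)
{
    /* 0 is the default translation returned for unmapped regions. */
    struct spdk_mem_map *map = spdk_mem_map_alloc(0, &sketch_ops, NULL);
    uint64_t len = 0x200000;

    /* vaddr and size must be 2 MiB aligned — exactly the rule that
     * memory_ut violates (len=1234) to provoke the errors above. */
    spdk_mem_map_set_translation(map, 0x200000000000ULL, 0x200000, 0x1000);

    printf("translation: 0x%" PRIx64 " (len %" PRIu64 ")\n",
           spdk_mem_map_translate(map, 0x200000000000ULL, &len), len);

    spdk_mem_map_free(&map);
}
```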
00:40:18.298 EAL: Setting maximum number of open files to 524288 00:40:18.298 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:40:18.298 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:40:18.298 EAL: Ask a virtual area of 0x61000 bytes 00:40:18.298 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:40:18.298 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:40:18.298 EAL: Ask a virtual area of 0x400000000 bytes 00:40:18.298 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:40:18.298 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:40:18.298 EAL: Ask a virtual area of 0x61000 bytes 00:40:18.298 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:40:18.298 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:40:18.298 EAL: Ask a virtual area of 0x400000000 bytes 00:40:18.298 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:40:18.298 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:40:18.298 EAL: Ask a virtual area of 0x61000 bytes 00:40:18.298 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:40:18.298 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:40:18.298 EAL: Ask a virtual area of 0x400000000 bytes 00:40:18.298 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:40:18.298 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:40:18.298 EAL: Ask a virtual area of 0x61000 bytes 00:40:18.298 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:40:18.298 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:40:18.298 EAL: Ask a virtual area of 0x400000000 bytes 00:40:18.298 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:40:18.298 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:40:18.298 EAL: Hugepages will be freed exactly as allocated. 00:40:18.298 EAL: No shared files mode enabled, IPC is disabled 00:40:18.298 EAL: No shared files mode enabled, IPC is disabled 00:40:18.556 EAL: TSC frequency is ~2200000 KHz 00:40:18.556 EAL: Main lcore 0 is ready (tid=7effc6ae6a40;cpuset=[0]) 00:40:18.556 EAL: Trying to obtain current memory policy. 00:40:18.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:40:18.556 EAL: Restoring previous memory policy: 0 00:40:18.556 EAL: request: mp_malloc_sync 00:40:18.556 EAL: No shared files mode enabled, IPC is disabled 00:40:18.556 EAL: Heap on socket 0 was expanded by 2MB 00:40:18.556 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:40:18.556 EAL: No PCI address specified using 'addr=' in: bus=pci 00:40:18.556 EAL: Mem event callback 'spdk:(nil)' registered 00:40:18.556 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:40:18.556 00:40:18.556 00:40:18.556 CUnit - A unit testing framework for C - Version 2.1-3 00:40:18.556 http://cunit.sourceforge.net/ 00:40:18.556 00:40:18.556 00:40:18.556 Suite: components_suite 00:40:19.124 Test: vtophys_malloc_test ...passed 00:40:19.124 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
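The rounds that follow are vtophys_spdk_malloc_test: each "expanded by"/"shrunk by" pair is one allocation and free of a growing buffer, with the virtual-to-physical translation checked in between. One such round, sketched against the spdk/env.h DMA helpers — sizes are illustrative and the env is assumed to be initialized already:

```c
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

/* Allocate pinned DMA-safe memory, translate it, free it. */
void
vtophys_round(size_t size)
{
    uint64_t phys_len = size;
    void *buf = spdk_dma_malloc(size, 0x200000 /* 2 MiB align */, NULL);

    if (buf == NULL) {
        fprintf(stderr, "alloc of %zu bytes failed\n", size);
        return;
    }
    uint64_t paddr = spdk_vtophys(buf, &phys_len);
    if (paddr == SPDK_VTOPHYS_ERROR) {
        fprintf(stderr, "no translation for %p\n", buf);
    } else {
        printf("%p -> 0x%" PRIx64 " (%" PRIu64 " contiguous bytes)\n",
               buf, paddr, phys_len);
    }
    spdk_dma_free(buf);  /* may trigger a "Heap ... was shrunk" line */
}
```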
00:40:19.124 EAL: Setting policy MPOL_PREFERRED for socket 0 00:40:19.124 EAL: Restoring previous memory policy: 4 00:40:19.124 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.124 EAL: request: mp_malloc_sync 00:40:19.124 EAL: No shared files mode enabled, IPC is disabled 00:40:19.124 EAL: Heap on socket 0 was expanded by 4MB 00:40:19.124 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.124 EAL: request: mp_malloc_sync 00:40:19.124 EAL: No shared files mode enabled, IPC is disabled 00:40:19.124 EAL: Heap on socket 0 was shrunk by 4MB 00:40:19.124 EAL: Trying to obtain current memory policy. 00:40:19.124 EAL: Setting policy MPOL_PREFERRED for socket 0 00:40:19.124 EAL: Restoring previous memory policy: 4 00:40:19.124 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.125 EAL: request: mp_malloc_sync 00:40:19.125 EAL: No shared files mode enabled, IPC is disabled 00:40:19.125 EAL: Heap on socket 0 was expanded by 6MB 00:40:19.125 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.125 EAL: request: mp_malloc_sync 00:40:19.125 EAL: No shared files mode enabled, IPC is disabled 00:40:19.125 EAL: Heap on socket 0 was shrunk by 6MB 00:40:19.125 EAL: Trying to obtain current memory policy. 00:40:19.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:40:19.125 EAL: Restoring previous memory policy: 4 00:40:19.125 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.125 EAL: request: mp_malloc_sync 00:40:19.125 EAL: No shared files mode enabled, IPC is disabled 00:40:19.125 EAL: Heap on socket 0 was expanded by 10MB 00:40:19.125 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.125 EAL: request: mp_malloc_sync 00:40:19.125 EAL: No shared files mode enabled, IPC is disabled 00:40:19.125 EAL: Heap on socket 0 was shrunk by 10MB 00:40:19.125 EAL: Trying to obtain current memory policy. 00:40:19.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:40:19.125 EAL: Restoring previous memory policy: 4 00:40:19.125 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.125 EAL: request: mp_malloc_sync 00:40:19.125 EAL: No shared files mode enabled, IPC is disabled 00:40:19.125 EAL: Heap on socket 0 was expanded by 18MB 00:40:19.125 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.125 EAL: request: mp_malloc_sync 00:40:19.125 EAL: No shared files mode enabled, IPC is disabled 00:40:19.125 EAL: Heap on socket 0 was shrunk by 18MB 00:40:19.125 EAL: Trying to obtain current memory policy. 00:40:19.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:40:19.125 EAL: Restoring previous memory policy: 4 00:40:19.125 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.125 EAL: request: mp_malloc_sync 00:40:19.125 EAL: No shared files mode enabled, IPC is disabled 00:40:19.125 EAL: Heap on socket 0 was expanded by 34MB 00:40:19.125 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.125 EAL: request: mp_malloc_sync 00:40:19.125 EAL: No shared files mode enabled, IPC is disabled 00:40:19.125 EAL: Heap on socket 0 was shrunk by 34MB 00:40:19.383 EAL: Trying to obtain current memory policy. 
00:40:19.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:40:19.383 EAL: Restoring previous memory policy: 4 00:40:19.383 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.383 EAL: request: mp_malloc_sync 00:40:19.383 EAL: No shared files mode enabled, IPC is disabled 00:40:19.383 EAL: Heap on socket 0 was expanded by 66MB 00:40:19.383 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.383 EAL: request: mp_malloc_sync 00:40:19.383 EAL: No shared files mode enabled, IPC is disabled 00:40:19.384 EAL: Heap on socket 0 was shrunk by 66MB 00:40:19.642 EAL: Trying to obtain current memory policy. 00:40:19.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:40:19.642 EAL: Restoring previous memory policy: 4 00:40:19.642 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.642 EAL: request: mp_malloc_sync 00:40:19.642 EAL: No shared files mode enabled, IPC is disabled 00:40:19.643 EAL: Heap on socket 0 was expanded by 130MB 00:40:19.901 EAL: Calling mem event callback 'spdk:(nil)' 00:40:19.901 EAL: request: mp_malloc_sync 00:40:19.901 EAL: No shared files mode enabled, IPC is disabled 00:40:19.901 EAL: Heap on socket 0 was shrunk by 130MB 00:40:19.901 EAL: Trying to obtain current memory policy. 00:40:19.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:40:20.160 EAL: Restoring previous memory policy: 4 00:40:20.160 EAL: Calling mem event callback 'spdk:(nil)' 00:40:20.160 EAL: request: mp_malloc_sync 00:40:20.160 EAL: No shared files mode enabled, IPC is disabled 00:40:20.160 EAL: Heap on socket 0 was expanded by 258MB 00:40:20.420 EAL: Calling mem event callback 'spdk:(nil)' 00:40:20.678 EAL: request: mp_malloc_sync 00:40:20.678 EAL: No shared files mode enabled, IPC is disabled 00:40:20.678 EAL: Heap on socket 0 was shrunk by 258MB 00:40:20.936 EAL: Trying to obtain current memory policy. 00:40:20.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:40:21.194 EAL: Restoring previous memory policy: 4 00:40:21.194 EAL: Calling mem event callback 'spdk:(nil)' 00:40:21.194 EAL: request: mp_malloc_sync 00:40:21.194 EAL: No shared files mode enabled, IPC is disabled 00:40:21.194 EAL: Heap on socket 0 was expanded by 514MB 00:40:21.762 EAL: Calling mem event callback 'spdk:(nil)' 00:40:22.021 EAL: request: mp_malloc_sync 00:40:22.021 EAL: No shared files mode enabled, IPC is disabled 00:40:22.021 EAL: Heap on socket 0 was shrunk by 514MB 00:40:22.587 EAL: Trying to obtain current memory policy. 
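The "Calling mem event callback 'spdk:(nil)'" lines are DPDK notifying SPDK as heap memory comes and goes; env_mem_callbacks (further down) drives the same spdk_mem_register()/spdk_mem_unregister() hooks by hand. A sketch of that direct path, assuming the spdk/env.h declarations — the over-map-and-align step is needed because the API requires 2 MiB alignment:

```c
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include "spdk/env.h"

void
register_sketch(void)
{
    size_t len = 2 * 1024 * 1024;

    /* spdk_mem_register() wants a 2 MiB-aligned vaddr and length, so
     * over-map and round up to a 2 MiB boundary inside the mapping. */
    uint8_t *raw = mmap(NULL, 2 * len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (raw == MAP_FAILED) {
        perror("mmap");
        return;
    }
    void *vaddr = (void *)(((uintptr_t)raw + len - 1) & ~(uintptr_t)(len - 1));

    if (spdk_mem_register(vaddr, len) == 0) {
        /* Registered ranges are what every spdk_mem_map gets notified
         * about, like the register/unregister lines in the test below. */
        spdk_mem_unregister(vaddr, len);
    } else {
        fprintf(stderr, "spdk_mem_register failed\n");
    }
    munmap(raw, 2 * len);
}
```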
00:40:22.587 EAL: Setting policy MPOL_PREFERRED for socket 0 00:40:23.248 EAL: Restoring previous memory policy: 4 00:40:23.248 EAL: Calling mem event callback 'spdk:(nil)' 00:40:23.248 EAL: request: mp_malloc_sync 00:40:23.248 EAL: No shared files mode enabled, IPC is disabled 00:40:23.248 EAL: Heap on socket 0 was expanded by 1026MB 00:40:24.621 EAL: Calling mem event callback 'spdk:(nil)' 00:40:24.879 EAL: request: mp_malloc_sync 00:40:24.879 EAL: No shared files mode enabled, IPC is disabled 00:40:24.879 EAL: Heap on socket 0 was shrunk by 1026MB 00:40:26.253 passed 00:40:26.253 00:40:26.253 Run Summary: Type Total Ran Passed Failed Inactive 00:40:26.253 suites 1 1 n/a 0 0 00:40:26.253 tests 2 2 2 0 0 00:40:26.253 asserts 5565 5565 5565 0 n/a 00:40:26.253 00:40:26.253 Elapsed time = 7.670 seconds 00:40:26.253 EAL: Calling mem event callback 'spdk:(nil)' 00:40:26.253 EAL: request: mp_malloc_sync 00:40:26.253 EAL: No shared files mode enabled, IPC is disabled 00:40:26.253 EAL: Heap on socket 0 was shrunk by 2MB 00:40:26.253 EAL: No shared files mode enabled, IPC is disabled 00:40:26.253 EAL: No shared files mode enabled, IPC is disabled 00:40:26.253 EAL: No shared files mode enabled, IPC is disabled 00:40:26.253 00:40:26.253 real 0m8.016s 00:40:26.253 user 0m6.733s 00:40:26.253 sys 0m1.114s 00:40:26.253 09:51:33 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:26.253 ************************************ 00:40:26.253 END TEST env_vtophys 00:40:26.253 ************************************ 00:40:26.253 09:51:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:40:26.253 09:51:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:40:26.253 09:51:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:26.253 09:51:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:26.511 09:51:33 env -- common/autotest_common.sh@10 -- # set +x 00:40:26.511 ************************************ 00:40:26.511 START TEST env_pci 00:40:26.511 ************************************ 00:40:26.511 09:51:33 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:40:26.511 00:40:26.511 00:40:26.511 CUnit - A unit testing framework for C - Version 2.1-3 00:40:26.511 http://cunit.sourceforge.net/ 00:40:26.511 00:40:26.511 00:40:26.511 Suite: pci 00:40:26.511 Test: pci_hook ...[2024-12-09 09:51:33.347273] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58069 has claimed it 00:40:26.511 passed 00:40:26.511 00:40:26.511 Run Summary: Type Total Ran Passed Failed Inactive 00:40:26.511 suites 1 1 n/a 0 0 00:40:26.511 tests 1 1 1 0 0 00:40:26.511 asserts 25 25 25 0 n/a 00:40:26.511 00:40:26.511 Elapsed time = 0.009 secondsEAL: Cannot find device (10000:00:01.0) 00:40:26.511 EAL: Failed to attach device on primary process 00:40:26.511 00:40:26.511 00:40:26.511 real 0m0.087s 00:40:26.511 user 0m0.043s 00:40:26.511 sys 0m0.043s 00:40:26.511 09:51:33 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:26.511 ************************************ 00:40:26.511 END TEST env_pci 00:40:26.511 ************************************ 00:40:26.511 09:51:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:40:26.511 09:51:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:40:26.511 09:51:33 env -- env/env.sh@15 -- # uname 00:40:26.511 09:51:33 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:40:26.511 09:51:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:40:26.511 09:51:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:40:26.511 09:51:33 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:40:26.511 09:51:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:26.511 09:51:33 env -- common/autotest_common.sh@10 -- # set +x 00:40:26.511 ************************************ 00:40:26.511 START TEST env_dpdk_post_init 00:40:26.511 ************************************ 00:40:26.511 09:51:33 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:40:26.511 EAL: Detected CPU lcores: 10 00:40:26.511 EAL: Detected NUMA nodes: 1 00:40:26.511 EAL: Detected shared linkage of DPDK 00:40:26.511 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:40:26.511 EAL: Selected IOVA mode 'PA' 00:40:26.769 TELEMETRY: No legacy callbacks, legacy socket not created 00:40:26.769 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:40:26.769 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:40:26.769 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:40:26.769 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:40:26.769 Starting DPDK initialization... 00:40:26.769 Starting SPDK post initialization... 00:40:26.769 SPDK NVMe probe 00:40:26.769 Attaching to 0000:00:10.0 00:40:26.769 Attaching to 0000:00:11.0 00:40:26.769 Attaching to 0000:00:12.0 00:40:26.769 Attaching to 0000:00:13.0 00:40:26.769 Attached to 0000:00:10.0 00:40:26.769 Attached to 0000:00:11.0 00:40:26.769 Attached to 0000:00:13.0 00:40:26.769 Attached to 0000:00:12.0 00:40:26.769 Cleaning up... 
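The probe sequence above is driven by the flags on the test binary: -c 0x1 pins a single core and --base-virtaddr=0x200000000000 fixes where the hugepage VAs land, matching the memseg addresses seen earlier. Roughly how those flags become environment options — a sketch assuming the spdk/env.h field names, with error handling trimmed:

```c
#include <stdio.h>
#include "spdk/env.h"

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "env_dpdk_post_init_sketch";  /* illustrative app name */
    opts.core_mask = "0x1";                   /* -c 0x1 */
    opts.base_virtaddr = 0x200000000000ULL;   /* --base-virtaddr=... */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "Unable to initialize SPDK env\n");
        return 1;
    }
    /* DPDK is now initialized; the test then probes the NVMe
     * controllers bound to uio_pci_generic, which is what produced
     * the "Probe PCI driver: spdk_nvme (1b36:0010)" and
     * "Attaching to ..." lines above. */
    return 0;
}
```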
00:40:26.769 00:40:26.769 real 0m0.301s 00:40:26.769 user 0m0.109s 00:40:26.769 sys 0m0.095s 00:40:26.769 09:51:33 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:26.769 09:51:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:40:26.769 ************************************ 00:40:26.769 END TEST env_dpdk_post_init 00:40:26.769 ************************************ 00:40:26.769 09:51:33 env -- env/env.sh@26 -- # uname 00:40:26.769 09:51:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:40:26.769 09:51:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:40:26.769 09:51:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:26.769 09:51:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:26.769 09:51:33 env -- common/autotest_common.sh@10 -- # set +x 00:40:27.027 ************************************ 00:40:27.027 START TEST env_mem_callbacks 00:40:27.027 ************************************ 00:40:27.027 09:51:33 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:40:27.027 EAL: Detected CPU lcores: 10 00:40:27.027 EAL: Detected NUMA nodes: 1 00:40:27.027 EAL: Detected shared linkage of DPDK 00:40:27.027 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:40:27.027 EAL: Selected IOVA mode 'PA' 00:40:27.027 TELEMETRY: No legacy callbacks, legacy socket not created 00:40:27.027 00:40:27.027 00:40:27.027 CUnit - A unit testing framework for C - Version 2.1-3 00:40:27.027 http://cunit.sourceforge.net/ 00:40:27.027 00:40:27.027 00:40:27.027 Suite: memory 00:40:27.027 Test: test ... 00:40:27.027 register 0x200000200000 2097152 00:40:27.027 malloc 3145728 00:40:27.027 register 0x200000400000 4194304 00:40:27.027 buf 0x2000004fffc0 len 3145728 PASSED 00:40:27.027 malloc 64 00:40:27.027 buf 0x2000004ffec0 len 64 PASSED 00:40:27.027 malloc 4194304 00:40:27.027 register 0x200000800000 6291456 00:40:27.027 buf 0x2000009fffc0 len 4194304 PASSED 00:40:27.027 free 0x2000004fffc0 3145728 00:40:27.027 free 0x2000004ffec0 64 00:40:27.027 unregister 0x200000400000 4194304 PASSED 00:40:27.027 free 0x2000009fffc0 4194304 00:40:27.027 unregister 0x200000800000 6291456 PASSED 00:40:27.027 malloc 8388608 00:40:27.027 register 0x200000400000 10485760 00:40:27.027 buf 0x2000005fffc0 len 8388608 PASSED 00:40:27.027 free 0x2000005fffc0 8388608 00:40:27.027 unregister 0x200000400000 10485760 PASSED 00:40:27.027 passed 00:40:27.027 00:40:27.027 Run Summary: Type Total Ran Passed Failed Inactive 00:40:27.027 suites 1 1 n/a 0 0 00:40:27.027 tests 1 1 1 0 0 00:40:27.027 asserts 15 15 15 0 n/a 00:40:27.027 00:40:27.027 Elapsed time = 0.061 seconds 00:40:27.284 00:40:27.284 real 0m0.260s 00:40:27.284 user 0m0.084s 00:40:27.284 sys 0m0.074s 00:40:27.284 09:51:34 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.284 ************************************ 00:40:27.284 END TEST env_mem_callbacks 00:40:27.284 ************************************ 00:40:27.284 09:51:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:40:27.284 00:40:27.284 real 0m9.554s 00:40:27.284 user 0m7.540s 00:40:27.284 sys 0m1.624s 00:40:27.284 09:51:34 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.284 09:51:34 env -- common/autotest_common.sh@10 -- # set +x 00:40:27.284 ************************************ 00:40:27.284 END TEST env 00:40:27.284 
************************************ 00:40:27.284 09:51:34 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:40:27.284 09:51:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:27.284 09:51:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:27.284 09:51:34 -- common/autotest_common.sh@10 -- # set +x 00:40:27.284 ************************************ 00:40:27.284 START TEST rpc 00:40:27.284 ************************************ 00:40:27.284 09:51:34 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:40:27.284 * Looking for test storage... 00:40:27.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:40:27.284 09:51:34 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:27.284 09:51:34 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:40:27.284 09:51:34 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:27.542 09:51:34 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:27.542 09:51:34 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:27.542 09:51:34 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:27.542 09:51:34 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:27.542 09:51:34 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:40:27.542 09:51:34 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:40:27.542 09:51:34 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:40:27.542 09:51:34 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:40:27.542 09:51:34 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:40:27.542 09:51:34 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:40:27.542 09:51:34 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:40:27.542 09:51:34 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:27.542 09:51:34 rpc -- scripts/common.sh@344 -- # case "$op" in 00:40:27.542 09:51:34 rpc -- scripts/common.sh@345 -- # : 1 00:40:27.542 09:51:34 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:27.542 09:51:34 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:27.542 09:51:34 rpc -- scripts/common.sh@365 -- # decimal 1 00:40:27.542 09:51:34 rpc -- scripts/common.sh@353 -- # local d=1 00:40:27.542 09:51:34 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:27.542 09:51:34 rpc -- scripts/common.sh@355 -- # echo 1 00:40:27.542 09:51:34 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:40:27.542 09:51:34 rpc -- scripts/common.sh@366 -- # decimal 2 00:40:27.542 09:51:34 rpc -- scripts/common.sh@353 -- # local d=2 00:40:27.542 09:51:34 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:27.542 09:51:34 rpc -- scripts/common.sh@355 -- # echo 2 00:40:27.542 09:51:34 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:40:27.542 09:51:34 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:27.542 09:51:34 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:27.542 09:51:34 rpc -- scripts/common.sh@368 -- # return 0 00:40:27.542 09:51:34 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:27.542 09:51:34 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:27.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.542 --rc genhtml_branch_coverage=1 00:40:27.542 --rc genhtml_function_coverage=1 00:40:27.542 --rc genhtml_legend=1 00:40:27.542 --rc geninfo_all_blocks=1 00:40:27.542 --rc geninfo_unexecuted_blocks=1 00:40:27.542 00:40:27.542 ' 00:40:27.542 09:51:34 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:27.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.542 --rc genhtml_branch_coverage=1 00:40:27.542 --rc genhtml_function_coverage=1 00:40:27.542 --rc genhtml_legend=1 00:40:27.542 --rc geninfo_all_blocks=1 00:40:27.542 --rc geninfo_unexecuted_blocks=1 00:40:27.542 00:40:27.542 ' 00:40:27.542 09:51:34 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:27.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.542 --rc genhtml_branch_coverage=1 00:40:27.542 --rc genhtml_function_coverage=1 00:40:27.542 --rc genhtml_legend=1 00:40:27.542 --rc geninfo_all_blocks=1 00:40:27.542 --rc geninfo_unexecuted_blocks=1 00:40:27.542 00:40:27.542 ' 00:40:27.542 09:51:34 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:27.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.542 --rc genhtml_branch_coverage=1 00:40:27.542 --rc genhtml_function_coverage=1 00:40:27.542 --rc genhtml_legend=1 00:40:27.542 --rc geninfo_all_blocks=1 00:40:27.542 --rc geninfo_unexecuted_blocks=1 00:40:27.542 00:40:27.542 ' 00:40:27.542 09:51:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58196 00:40:27.542 09:51:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:40:27.542 09:51:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:40:27.542 09:51:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58196 00:40:27.542 09:51:34 rpc -- common/autotest_common.sh@835 -- # '[' -z 58196 ']' 00:40:27.542 09:51:34 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:27.542 09:51:34 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:27.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:27.542 09:51:34 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
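waitforlisten polls until spdk_tgt accepts connections on /var/tmp/spdk.sock; every rpc_cmd in the tests below is one JSON-RPC request over that socket, normally sent via scripts/rpc.py. A minimal hand-rolled equivalent of `rpc_cmd bdev_malloc_create 8 512` — a sketch of the wire protocol, not how the harness actually sends it:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int
main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };

    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* 8 MiB at a 512-byte block size is 16384 blocks, matching the
     * "num_blocks": 16384 in the bdev JSON the test dumps later. */
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
        "\"params\":{\"num_blocks\":16384,\"block_size\":512}}";
    write(fd, req, strlen(req));

    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* single short read: sketch only */
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);  /* e.g. {"jsonrpc":"2.0","id":1,"result":"Malloc0"} */
    }
    close(fd);
    return 0;
}
```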
00:40:27.542 09:51:34 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:27.542 09:51:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:40:27.542 [2024-12-09 09:51:34.505790] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:40:27.542 [2024-12-09 09:51:34.506006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58196 ] 00:40:27.800 [2024-12-09 09:51:34.702946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.057 [2024-12-09 09:51:34.865739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:40:28.057 [2024-12-09 09:51:34.865863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58196' to capture a snapshot of events at runtime. 00:40:28.057 [2024-12-09 09:51:34.865884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:28.057 [2024-12-09 09:51:34.865904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:28.057 [2024-12-09 09:51:34.865920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58196 for offline analysis/debug. 00:40:28.057 [2024-12-09 09:51:34.867625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.988 09:51:35 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:28.988 09:51:35 rpc -- common/autotest_common.sh@868 -- # return 0 00:40:28.988 09:51:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:40:28.988 09:51:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:40:28.988 09:51:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:40:28.988 09:51:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:40:28.988 09:51:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:28.988 09:51:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:28.988 09:51:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:40:28.988 ************************************ 00:40:28.988 START TEST rpc_integrity 00:40:28.988 ************************************ 00:40:28.988 09:51:35 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:40:28.988 09:51:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:28.988 09:51:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.988 09:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:28.988 09:51:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.988 09:51:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:40:28.988 09:51:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:40:28.988 09:51:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:40:28.988 09:51:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:40:28.988 09:51:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.988 09:51:35 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:28.988 09:51:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.988 09:51:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:40:28.988 09:51:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:40:28.988 09:51:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.988 09:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:28.988 09:51:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.988 09:51:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:40:28.988 { 00:40:28.988 "name": "Malloc0", 00:40:28.988 "aliases": [ 00:40:28.988 "cf83a3e4-3659-48cb-b7e6-357711aad6f7" 00:40:28.988 ], 00:40:28.988 "product_name": "Malloc disk", 00:40:28.988 "block_size": 512, 00:40:28.988 "num_blocks": 16384, 00:40:28.988 "uuid": "cf83a3e4-3659-48cb-b7e6-357711aad6f7", 00:40:28.988 "assigned_rate_limits": { 00:40:28.988 "rw_ios_per_sec": 0, 00:40:28.988 "rw_mbytes_per_sec": 0, 00:40:28.988 "r_mbytes_per_sec": 0, 00:40:28.988 "w_mbytes_per_sec": 0 00:40:28.988 }, 00:40:28.988 "claimed": false, 00:40:28.988 "zoned": false, 00:40:28.988 "supported_io_types": { 00:40:28.988 "read": true, 00:40:28.988 "write": true, 00:40:28.988 "unmap": true, 00:40:28.988 "flush": true, 00:40:28.988 "reset": true, 00:40:28.988 "nvme_admin": false, 00:40:28.988 "nvme_io": false, 00:40:28.988 "nvme_io_md": false, 00:40:28.988 "write_zeroes": true, 00:40:28.988 "zcopy": true, 00:40:28.988 "get_zone_info": false, 00:40:28.988 "zone_management": false, 00:40:28.988 "zone_append": false, 00:40:28.988 "compare": false, 00:40:28.988 "compare_and_write": false, 00:40:28.988 "abort": true, 00:40:28.988 "seek_hole": false, 00:40:28.988 "seek_data": false, 00:40:28.988 "copy": true, 00:40:28.988 "nvme_iov_md": false 00:40:28.988 }, 00:40:28.988 "memory_domains": [ 00:40:28.988 { 00:40:28.988 "dma_device_id": "system", 00:40:28.988 "dma_device_type": 1 00:40:28.988 }, 00:40:28.988 { 00:40:28.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:28.988 "dma_device_type": 2 00:40:28.988 } 00:40:28.988 ], 00:40:28.988 "driver_specific": {} 00:40:28.988 } 00:40:28.988 ]' 00:40:28.988 09:51:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:40:28.988 09:51:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:40:28.988 09:51:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:40:28.988 09:51:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.988 09:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:28.988 [2024-12-09 09:51:35.997175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:40:28.988 [2024-12-09 09:51:35.997270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:28.988 [2024-12-09 09:51:35.997316] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:40:28.988 [2024-12-09 09:51:35.997339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:28.988 [2024-12-09 09:51:36.000758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:28.988 [2024-12-09 09:51:36.000849] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:40:28.988 Passthru0 00:40:28.988 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.988 
09:51:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:40:28.988 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.988 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:29.247 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.247 09:51:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:40:29.247 { 00:40:29.247 "name": "Malloc0", 00:40:29.247 "aliases": [ 00:40:29.247 "cf83a3e4-3659-48cb-b7e6-357711aad6f7" 00:40:29.247 ], 00:40:29.247 "product_name": "Malloc disk", 00:40:29.247 "block_size": 512, 00:40:29.247 "num_blocks": 16384, 00:40:29.247 "uuid": "cf83a3e4-3659-48cb-b7e6-357711aad6f7", 00:40:29.247 "assigned_rate_limits": { 00:40:29.247 "rw_ios_per_sec": 0, 00:40:29.247 "rw_mbytes_per_sec": 0, 00:40:29.247 "r_mbytes_per_sec": 0, 00:40:29.247 "w_mbytes_per_sec": 0 00:40:29.247 }, 00:40:29.247 "claimed": true, 00:40:29.247 "claim_type": "exclusive_write", 00:40:29.247 "zoned": false, 00:40:29.247 "supported_io_types": { 00:40:29.247 "read": true, 00:40:29.247 "write": true, 00:40:29.247 "unmap": true, 00:40:29.247 "flush": true, 00:40:29.247 "reset": true, 00:40:29.247 "nvme_admin": false, 00:40:29.247 "nvme_io": false, 00:40:29.247 "nvme_io_md": false, 00:40:29.247 "write_zeroes": true, 00:40:29.247 "zcopy": true, 00:40:29.247 "get_zone_info": false, 00:40:29.247 "zone_management": false, 00:40:29.247 "zone_append": false, 00:40:29.247 "compare": false, 00:40:29.247 "compare_and_write": false, 00:40:29.247 "abort": true, 00:40:29.247 "seek_hole": false, 00:40:29.247 "seek_data": false, 00:40:29.247 "copy": true, 00:40:29.247 "nvme_iov_md": false 00:40:29.247 }, 00:40:29.247 "memory_domains": [ 00:40:29.247 { 00:40:29.247 "dma_device_id": "system", 00:40:29.247 "dma_device_type": 1 00:40:29.247 }, 00:40:29.247 { 00:40:29.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:29.247 "dma_device_type": 2 00:40:29.247 } 00:40:29.247 ], 00:40:29.247 "driver_specific": {} 00:40:29.247 }, 00:40:29.247 { 00:40:29.247 "name": "Passthru0", 00:40:29.247 "aliases": [ 00:40:29.247 "5b369647-b744-598d-9090-9def99a458a1" 00:40:29.247 ], 00:40:29.247 "product_name": "passthru", 00:40:29.247 "block_size": 512, 00:40:29.247 "num_blocks": 16384, 00:40:29.247 "uuid": "5b369647-b744-598d-9090-9def99a458a1", 00:40:29.247 "assigned_rate_limits": { 00:40:29.247 "rw_ios_per_sec": 0, 00:40:29.247 "rw_mbytes_per_sec": 0, 00:40:29.247 "r_mbytes_per_sec": 0, 00:40:29.247 "w_mbytes_per_sec": 0 00:40:29.247 }, 00:40:29.247 "claimed": false, 00:40:29.247 "zoned": false, 00:40:29.247 "supported_io_types": { 00:40:29.247 "read": true, 00:40:29.247 "write": true, 00:40:29.247 "unmap": true, 00:40:29.247 "flush": true, 00:40:29.247 "reset": true, 00:40:29.247 "nvme_admin": false, 00:40:29.247 "nvme_io": false, 00:40:29.247 "nvme_io_md": false, 00:40:29.247 "write_zeroes": true, 00:40:29.247 "zcopy": true, 00:40:29.247 "get_zone_info": false, 00:40:29.247 "zone_management": false, 00:40:29.247 "zone_append": false, 00:40:29.247 "compare": false, 00:40:29.247 "compare_and_write": false, 00:40:29.247 "abort": true, 00:40:29.247 "seek_hole": false, 00:40:29.247 "seek_data": false, 00:40:29.247 "copy": true, 00:40:29.247 "nvme_iov_md": false 00:40:29.247 }, 00:40:29.247 "memory_domains": [ 00:40:29.247 { 00:40:29.247 "dma_device_id": "system", 00:40:29.247 "dma_device_type": 1 00:40:29.247 }, 00:40:29.247 { 00:40:29.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:29.247 "dma_device_type": 2 
00:40:29.247 } 00:40:29.247 ], 00:40:29.247 "driver_specific": { 00:40:29.247 "passthru": { 00:40:29.247 "name": "Passthru0", 00:40:29.247 "base_bdev_name": "Malloc0" 00:40:29.247 } 00:40:29.247 } 00:40:29.247 } 00:40:29.247 ]' 00:40:29.247 09:51:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:40:29.247 09:51:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:40:29.247 09:51:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:40:29.247 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.247 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:29.247 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.247 09:51:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:40:29.247 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.247 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:29.247 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.247 09:51:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:40:29.247 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.247 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:29.247 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.247 09:51:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:40:29.247 09:51:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:40:29.247 09:51:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:40:29.247 00:40:29.247 real 0m0.346s 00:40:29.247 user 0m0.203s 00:40:29.247 sys 0m0.045s 00:40:29.247 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:29.247 09:51:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:29.247 ************************************ 00:40:29.247 END TEST rpc_integrity 00:40:29.247 ************************************ 00:40:29.247 09:51:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:40:29.247 09:51:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:29.247 09:51:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.247 09:51:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:40:29.247 ************************************ 00:40:29.247 START TEST rpc_plugins 00:40:29.247 ************************************ 00:40:29.247 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:40:29.247 09:51:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:40:29.247 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.247 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:40:29.247 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.247 09:51:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:40:29.247 09:51:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:40:29.247 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.247 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:40:29.248 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.248 09:51:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:40:29.248 { 00:40:29.248 "name": "Malloc1", 00:40:29.248 "aliases": 
[ 00:40:29.248 "2b60b1e4-8c93-4b0a-bd5e-4bc3cacb16d0" 00:40:29.248 ], 00:40:29.248 "product_name": "Malloc disk", 00:40:29.248 "block_size": 4096, 00:40:29.248 "num_blocks": 256, 00:40:29.248 "uuid": "2b60b1e4-8c93-4b0a-bd5e-4bc3cacb16d0", 00:40:29.248 "assigned_rate_limits": { 00:40:29.248 "rw_ios_per_sec": 0, 00:40:29.248 "rw_mbytes_per_sec": 0, 00:40:29.248 "r_mbytes_per_sec": 0, 00:40:29.248 "w_mbytes_per_sec": 0 00:40:29.248 }, 00:40:29.248 "claimed": false, 00:40:29.248 "zoned": false, 00:40:29.248 "supported_io_types": { 00:40:29.248 "read": true, 00:40:29.248 "write": true, 00:40:29.248 "unmap": true, 00:40:29.248 "flush": true, 00:40:29.248 "reset": true, 00:40:29.248 "nvme_admin": false, 00:40:29.248 "nvme_io": false, 00:40:29.248 "nvme_io_md": false, 00:40:29.248 "write_zeroes": true, 00:40:29.248 "zcopy": true, 00:40:29.248 "get_zone_info": false, 00:40:29.248 "zone_management": false, 00:40:29.248 "zone_append": false, 00:40:29.248 "compare": false, 00:40:29.248 "compare_and_write": false, 00:40:29.248 "abort": true, 00:40:29.248 "seek_hole": false, 00:40:29.248 "seek_data": false, 00:40:29.248 "copy": true, 00:40:29.248 "nvme_iov_md": false 00:40:29.248 }, 00:40:29.248 "memory_domains": [ 00:40:29.248 { 00:40:29.248 "dma_device_id": "system", 00:40:29.248 "dma_device_type": 1 00:40:29.248 }, 00:40:29.248 { 00:40:29.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:29.248 "dma_device_type": 2 00:40:29.248 } 00:40:29.248 ], 00:40:29.248 "driver_specific": {} 00:40:29.248 } 00:40:29.248 ]' 00:40:29.248 09:51:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:40:29.506 09:51:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:40:29.506 09:51:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:40:29.506 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.506 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:40:29.506 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.506 09:51:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:40:29.506 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.506 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:40:29.506 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.506 09:51:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:40:29.506 09:51:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:40:29.507 09:51:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:40:29.507 00:40:29.507 real 0m0.171s 00:40:29.507 user 0m0.112s 00:40:29.507 sys 0m0.015s 00:40:29.507 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:29.507 09:51:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:40:29.507 ************************************ 00:40:29.507 END TEST rpc_plugins 00:40:29.507 ************************************ 00:40:29.507 09:51:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:40:29.507 09:51:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:29.507 09:51:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.507 09:51:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:40:29.507 ************************************ 00:40:29.507 START TEST rpc_trace_cmd_test 00:40:29.507 ************************************ 00:40:29.507 09:51:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:40:29.507 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:40:29.507 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:40:29.507 09:51:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.507 09:51:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:40:29.507 09:51:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.507 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:40:29.507 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58196", 00:40:29.507 "tpoint_group_mask": "0x8", 00:40:29.507 "iscsi_conn": { 00:40:29.507 "mask": "0x2", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "scsi": { 00:40:29.507 "mask": "0x4", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "bdev": { 00:40:29.507 "mask": "0x8", 00:40:29.507 "tpoint_mask": "0xffffffffffffffff" 00:40:29.507 }, 00:40:29.507 "nvmf_rdma": { 00:40:29.507 "mask": "0x10", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "nvmf_tcp": { 00:40:29.507 "mask": "0x20", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "ftl": { 00:40:29.507 "mask": "0x40", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "blobfs": { 00:40:29.507 "mask": "0x80", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "dsa": { 00:40:29.507 "mask": "0x200", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "thread": { 00:40:29.507 "mask": "0x400", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "nvme_pcie": { 00:40:29.507 "mask": "0x800", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "iaa": { 00:40:29.507 "mask": "0x1000", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "nvme_tcp": { 00:40:29.507 "mask": "0x2000", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "bdev_nvme": { 00:40:29.507 "mask": "0x4000", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "sock": { 00:40:29.507 "mask": "0x8000", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "blob": { 00:40:29.507 "mask": "0x10000", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "bdev_raid": { 00:40:29.507 "mask": "0x20000", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 }, 00:40:29.507 "scheduler": { 00:40:29.507 "mask": "0x40000", 00:40:29.507 "tpoint_mask": "0x0" 00:40:29.507 } 00:40:29.507 }' 00:40:29.507 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:40:29.507 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:40:29.507 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:40:29.765 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:40:29.765 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:40:29.765 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:40:29.765 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:40:29.765 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:40:29.765 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:40:29.765 09:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:40:29.765 00:40:29.765 real 0m0.272s 00:40:29.765 user 0m0.232s 00:40:29.765 sys 0m0.028s 00:40:29.765 09:51:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:40:29.765 ************************************ 00:40:29.765 END TEST rpc_trace_cmd_test 00:40:29.765 09:51:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:40:29.765 ************************************ 00:40:29.765 09:51:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:40:29.765 09:51:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:40:29.765 09:51:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:40:29.765 09:51:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:29.765 09:51:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.765 09:51:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:40:29.765 ************************************ 00:40:29.765 START TEST rpc_daemon_integrity 00:40:29.765 ************************************ 00:40:29.765 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:40:29.765 09:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:29.765 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.765 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:29.765 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.765 09:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:40:29.765 09:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:40:30.023 09:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:40:30.023 09:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:40:30.023 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.023 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:30.023 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.023 09:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:40:30.023 09:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:40:30.023 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.023 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:30.023 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.023 09:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:40:30.023 { 00:40:30.023 "name": "Malloc2", 00:40:30.024 "aliases": [ 00:40:30.024 "38005c63-7d43-40c5-a5a1-6cdf2648ef4a" 00:40:30.024 ], 00:40:30.024 "product_name": "Malloc disk", 00:40:30.024 "block_size": 512, 00:40:30.024 "num_blocks": 16384, 00:40:30.024 "uuid": "38005c63-7d43-40c5-a5a1-6cdf2648ef4a", 00:40:30.024 "assigned_rate_limits": { 00:40:30.024 "rw_ios_per_sec": 0, 00:40:30.024 "rw_mbytes_per_sec": 0, 00:40:30.024 "r_mbytes_per_sec": 0, 00:40:30.024 "w_mbytes_per_sec": 0 00:40:30.024 }, 00:40:30.024 "claimed": false, 00:40:30.024 "zoned": false, 00:40:30.024 "supported_io_types": { 00:40:30.024 "read": true, 00:40:30.024 "write": true, 00:40:30.024 "unmap": true, 00:40:30.024 "flush": true, 00:40:30.024 "reset": true, 00:40:30.024 "nvme_admin": false, 00:40:30.024 "nvme_io": false, 00:40:30.024 "nvme_io_md": false, 00:40:30.024 "write_zeroes": true, 00:40:30.024 "zcopy": true, 00:40:30.024 "get_zone_info": false, 00:40:30.024 "zone_management": false, 00:40:30.024 "zone_append": false, 00:40:30.024 "compare": false, 00:40:30.024 
"compare_and_write": false, 00:40:30.024 "abort": true, 00:40:30.024 "seek_hole": false, 00:40:30.024 "seek_data": false, 00:40:30.024 "copy": true, 00:40:30.024 "nvme_iov_md": false 00:40:30.024 }, 00:40:30.024 "memory_domains": [ 00:40:30.024 { 00:40:30.024 "dma_device_id": "system", 00:40:30.024 "dma_device_type": 1 00:40:30.024 }, 00:40:30.024 { 00:40:30.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:30.024 "dma_device_type": 2 00:40:30.024 } 00:40:30.024 ], 00:40:30.024 "driver_specific": {} 00:40:30.024 } 00:40:30.024 ]' 00:40:30.024 09:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:40:30.024 09:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:40:30.024 09:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:40:30.024 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.024 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:30.024 [2024-12-09 09:51:36.973470] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:40:30.024 [2024-12-09 09:51:36.973557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:30.024 [2024-12-09 09:51:36.973591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:40:30.024 [2024-12-09 09:51:36.973610] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:30.024 [2024-12-09 09:51:36.976761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:30.024 [2024-12-09 09:51:36.976842] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:40:30.024 Passthru0 00:40:30.024 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.024 09:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:40:30.024 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.024 09:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:30.024 09:51:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.024 09:51:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:40:30.024 { 00:40:30.024 "name": "Malloc2", 00:40:30.024 "aliases": [ 00:40:30.024 "38005c63-7d43-40c5-a5a1-6cdf2648ef4a" 00:40:30.024 ], 00:40:30.024 "product_name": "Malloc disk", 00:40:30.024 "block_size": 512, 00:40:30.024 "num_blocks": 16384, 00:40:30.024 "uuid": "38005c63-7d43-40c5-a5a1-6cdf2648ef4a", 00:40:30.024 "assigned_rate_limits": { 00:40:30.024 "rw_ios_per_sec": 0, 00:40:30.024 "rw_mbytes_per_sec": 0, 00:40:30.024 "r_mbytes_per_sec": 0, 00:40:30.024 "w_mbytes_per_sec": 0 00:40:30.024 }, 00:40:30.024 "claimed": true, 00:40:30.024 "claim_type": "exclusive_write", 00:40:30.024 "zoned": false, 00:40:30.024 "supported_io_types": { 00:40:30.024 "read": true, 00:40:30.024 "write": true, 00:40:30.024 "unmap": true, 00:40:30.024 "flush": true, 00:40:30.024 "reset": true, 00:40:30.024 "nvme_admin": false, 00:40:30.024 "nvme_io": false, 00:40:30.024 "nvme_io_md": false, 00:40:30.024 "write_zeroes": true, 00:40:30.024 "zcopy": true, 00:40:30.024 "get_zone_info": false, 00:40:30.024 "zone_management": false, 00:40:30.024 "zone_append": false, 00:40:30.024 "compare": false, 00:40:30.024 "compare_and_write": false, 00:40:30.024 "abort": true, 00:40:30.024 "seek_hole": false, 00:40:30.024 "seek_data": false, 
00:40:30.024 "copy": true, 00:40:30.024 "nvme_iov_md": false 00:40:30.024 }, 00:40:30.024 "memory_domains": [ 00:40:30.024 { 00:40:30.024 "dma_device_id": "system", 00:40:30.024 "dma_device_type": 1 00:40:30.024 }, 00:40:30.024 { 00:40:30.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:30.024 "dma_device_type": 2 00:40:30.024 } 00:40:30.024 ], 00:40:30.024 "driver_specific": {} 00:40:30.024 }, 00:40:30.024 { 00:40:30.024 "name": "Passthru0", 00:40:30.024 "aliases": [ 00:40:30.024 "c6924036-7992-597e-a3dc-6cd3fb66c66b" 00:40:30.024 ], 00:40:30.024 "product_name": "passthru", 00:40:30.024 "block_size": 512, 00:40:30.024 "num_blocks": 16384, 00:40:30.024 "uuid": "c6924036-7992-597e-a3dc-6cd3fb66c66b", 00:40:30.024 "assigned_rate_limits": { 00:40:30.024 "rw_ios_per_sec": 0, 00:40:30.024 "rw_mbytes_per_sec": 0, 00:40:30.024 "r_mbytes_per_sec": 0, 00:40:30.024 "w_mbytes_per_sec": 0 00:40:30.024 }, 00:40:30.024 "claimed": false, 00:40:30.024 "zoned": false, 00:40:30.024 "supported_io_types": { 00:40:30.024 "read": true, 00:40:30.024 "write": true, 00:40:30.024 "unmap": true, 00:40:30.024 "flush": true, 00:40:30.024 "reset": true, 00:40:30.024 "nvme_admin": false, 00:40:30.024 "nvme_io": false, 00:40:30.024 "nvme_io_md": false, 00:40:30.024 "write_zeroes": true, 00:40:30.024 "zcopy": true, 00:40:30.024 "get_zone_info": false, 00:40:30.024 "zone_management": false, 00:40:30.024 "zone_append": false, 00:40:30.024 "compare": false, 00:40:30.024 "compare_and_write": false, 00:40:30.024 "abort": true, 00:40:30.024 "seek_hole": false, 00:40:30.024 "seek_data": false, 00:40:30.024 "copy": true, 00:40:30.024 "nvme_iov_md": false 00:40:30.024 }, 00:40:30.024 "memory_domains": [ 00:40:30.024 { 00:40:30.024 "dma_device_id": "system", 00:40:30.024 "dma_device_type": 1 00:40:30.024 }, 00:40:30.024 { 00:40:30.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:30.024 "dma_device_type": 2 00:40:30.024 } 00:40:30.024 ], 00:40:30.024 "driver_specific": { 00:40:30.024 "passthru": { 00:40:30.024 "name": "Passthru0", 00:40:30.024 "base_bdev_name": "Malloc2" 00:40:30.024 } 00:40:30.024 } 00:40:30.024 } 00:40:30.024 ]' 00:40:30.024 09:51:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:40:30.024 09:51:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:40:30.024 09:51:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:40:30.024 09:51:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.024 09:51:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:30.024 09:51:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.024 09:51:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:40:30.024 09:51:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.024 09:51:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:30.282 09:51:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.282 09:51:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:40:30.282 09:51:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.282 09:51:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:30.282 09:51:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.282 09:51:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:40:30.282 09:51:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:40:30.282 09:51:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:40:30.282 00:40:30.282 real 0m0.378s 00:40:30.282 user 0m0.240s 00:40:30.282 sys 0m0.045s 00:40:30.282 09:51:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:30.282 09:51:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:40:30.282 ************************************ 00:40:30.282 END TEST rpc_daemon_integrity 00:40:30.282 ************************************ 00:40:30.282 09:51:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:30.282 09:51:37 rpc -- rpc/rpc.sh@84 -- # killprocess 58196 00:40:30.282 09:51:37 rpc -- common/autotest_common.sh@954 -- # '[' -z 58196 ']' 00:40:30.282 09:51:37 rpc -- common/autotest_common.sh@958 -- # kill -0 58196 00:40:30.282 09:51:37 rpc -- common/autotest_common.sh@959 -- # uname 00:40:30.282 09:51:37 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:30.282 09:51:37 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58196 00:40:30.282 09:51:37 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:30.282 09:51:37 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:30.282 killing process with pid 58196 00:40:30.282 09:51:37 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58196' 00:40:30.282 09:51:37 rpc -- common/autotest_common.sh@973 -- # kill 58196 00:40:30.282 09:51:37 rpc -- common/autotest_common.sh@978 -- # wait 58196 00:40:32.808 00:40:32.808 real 0m5.206s 00:40:32.808 user 0m5.904s 00:40:32.808 sys 0m0.950s 00:40:32.808 09:51:39 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:32.808 ************************************ 00:40:32.808 END TEST rpc 00:40:32.808 ************************************ 00:40:32.808 09:51:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:40:32.808 09:51:39 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:40:32.808 09:51:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:32.808 09:51:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:32.808 09:51:39 -- common/autotest_common.sh@10 -- # set +x 00:40:32.808 ************************************ 00:40:32.808 START TEST skip_rpc 00:40:32.808 ************************************ 00:40:32.808 09:51:39 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:40:32.808 * Looking for test storage... 
00:40:32.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:40:32.808 09:51:39 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:32.808 09:51:39 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:40:32.808 09:51:39 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:32.808 09:51:39 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@345 -- # : 1 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:32.808 09:51:39 skip_rpc -- scripts/common.sh@368 -- # return 0 00:40:32.808 09:51:39 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:32.808 09:51:39 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:32.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.808 --rc genhtml_branch_coverage=1 00:40:32.808 --rc genhtml_function_coverage=1 00:40:32.808 --rc genhtml_legend=1 00:40:32.808 --rc geninfo_all_blocks=1 00:40:32.808 --rc geninfo_unexecuted_blocks=1 00:40:32.808 00:40:32.808 ' 00:40:32.808 09:51:39 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:32.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.808 --rc genhtml_branch_coverage=1 00:40:32.808 --rc genhtml_function_coverage=1 00:40:32.808 --rc genhtml_legend=1 00:40:32.808 --rc geninfo_all_blocks=1 00:40:32.808 --rc geninfo_unexecuted_blocks=1 00:40:32.808 00:40:32.808 ' 00:40:32.808 09:51:39 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:40:32.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.809 --rc genhtml_branch_coverage=1 00:40:32.809 --rc genhtml_function_coverage=1 00:40:32.809 --rc genhtml_legend=1 00:40:32.809 --rc geninfo_all_blocks=1 00:40:32.809 --rc geninfo_unexecuted_blocks=1 00:40:32.809 00:40:32.809 ' 00:40:32.809 09:51:39 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:32.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.809 --rc genhtml_branch_coverage=1 00:40:32.809 --rc genhtml_function_coverage=1 00:40:32.809 --rc genhtml_legend=1 00:40:32.809 --rc geninfo_all_blocks=1 00:40:32.809 --rc geninfo_unexecuted_blocks=1 00:40:32.809 00:40:32.809 ' 00:40:32.809 09:51:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:40:32.809 09:51:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:40:32.809 09:51:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:40:32.809 09:51:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:32.809 09:51:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:32.809 09:51:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:32.809 ************************************ 00:40:32.809 START TEST skip_rpc 00:40:32.809 ************************************ 00:40:32.809 09:51:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:40:32.809 09:51:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58431 00:40:32.809 09:51:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:40:32.809 09:51:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:40:32.809 09:51:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:40:32.809 [2024-12-09 09:51:39.752350] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:40:32.809 [2024-12-09 09:51:39.752588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58431 ] 00:40:33.067 [2024-12-09 09:51:39.942593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.067 [2024-12-09 09:51:40.066095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58431 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58431 ']' 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58431 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58431 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58431' 00:40:38.334 killing process with pid 58431 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58431 00:40:38.334 09:51:44 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58431 00:40:40.235 00:40:40.235 real 0m7.192s 00:40:40.235 user 0m6.595s 00:40:40.235 sys 0m0.494s 00:40:40.235 09:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:40.235 09:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:40.235 ************************************ 00:40:40.235 END TEST skip_rpc 00:40:40.235 
************************************ 00:40:40.235 09:51:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:40:40.235 09:51:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:40.235 09:51:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:40.235 09:51:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:40.235 ************************************ 00:40:40.235 START TEST skip_rpc_with_json 00:40:40.235 ************************************ 00:40:40.235 09:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:40:40.235 09:51:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:40:40.235 09:51:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58535 00:40:40.235 09:51:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:40:40.235 09:51:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58535 00:40:40.235 09:51:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:40:40.235 09:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58535 ']' 00:40:40.235 09:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:40.235 09:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:40.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:40.235 09:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:40.235 09:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:40.235 09:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:40:40.235 [2024-12-09 09:51:46.986765] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:40:40.235 [2024-12-09 09:51:46.986983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58535 ] 00:40:40.235 [2024-12-09 09:51:47.176158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:40.493 [2024-12-09 09:51:47.302507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:40:41.427 [2024-12-09 09:51:48.176202] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:40:41.427 request: 00:40:41.427 { 00:40:41.427 "trtype": "tcp", 00:40:41.427 "method": "nvmf_get_transports", 00:40:41.427 "req_id": 1 00:40:41.427 } 00:40:41.427 Got JSON-RPC error response 00:40:41.427 response: 00:40:41.427 { 00:40:41.427 "code": -19, 00:40:41.427 "message": "No such device" 00:40:41.427 } 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:40:41.427 [2024-12-09 09:51:48.188407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.427 09:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:40:41.427 { 00:40:41.427 "subsystems": [ 00:40:41.427 { 00:40:41.427 "subsystem": "fsdev", 00:40:41.427 "config": [ 00:40:41.427 { 00:40:41.427 "method": "fsdev_set_opts", 00:40:41.427 "params": { 00:40:41.427 "fsdev_io_pool_size": 65535, 00:40:41.427 "fsdev_io_cache_size": 256 00:40:41.427 } 00:40:41.427 } 00:40:41.427 ] 00:40:41.427 }, 00:40:41.427 { 00:40:41.427 "subsystem": "keyring", 00:40:41.427 "config": [] 00:40:41.427 }, 00:40:41.427 { 00:40:41.427 "subsystem": "iobuf", 00:40:41.427 "config": [ 00:40:41.427 { 00:40:41.427 "method": "iobuf_set_options", 00:40:41.427 "params": { 00:40:41.427 "small_pool_count": 8192, 00:40:41.427 "large_pool_count": 1024, 00:40:41.427 "small_bufsize": 8192, 00:40:41.427 "large_bufsize": 135168, 00:40:41.427 "enable_numa": false 00:40:41.427 } 00:40:41.427 } 00:40:41.427 ] 00:40:41.427 }, 00:40:41.427 { 00:40:41.427 "subsystem": "sock", 00:40:41.427 "config": [ 00:40:41.427 { 
00:40:41.427 "method": "sock_set_default_impl", 00:40:41.427 "params": { 00:40:41.427 "impl_name": "posix" 00:40:41.427 } 00:40:41.427 }, 00:40:41.427 { 00:40:41.427 "method": "sock_impl_set_options", 00:40:41.427 "params": { 00:40:41.427 "impl_name": "ssl", 00:40:41.427 "recv_buf_size": 4096, 00:40:41.427 "send_buf_size": 4096, 00:40:41.427 "enable_recv_pipe": true, 00:40:41.427 "enable_quickack": false, 00:40:41.427 "enable_placement_id": 0, 00:40:41.427 "enable_zerocopy_send_server": true, 00:40:41.427 "enable_zerocopy_send_client": false, 00:40:41.427 "zerocopy_threshold": 0, 00:40:41.427 "tls_version": 0, 00:40:41.427 "enable_ktls": false 00:40:41.427 } 00:40:41.427 }, 00:40:41.427 { 00:40:41.427 "method": "sock_impl_set_options", 00:40:41.427 "params": { 00:40:41.427 "impl_name": "posix", 00:40:41.427 "recv_buf_size": 2097152, 00:40:41.427 "send_buf_size": 2097152, 00:40:41.427 "enable_recv_pipe": true, 00:40:41.427 "enable_quickack": false, 00:40:41.427 "enable_placement_id": 0, 00:40:41.427 "enable_zerocopy_send_server": true, 00:40:41.427 "enable_zerocopy_send_client": false, 00:40:41.427 "zerocopy_threshold": 0, 00:40:41.427 "tls_version": 0, 00:40:41.427 "enable_ktls": false 00:40:41.427 } 00:40:41.427 } 00:40:41.427 ] 00:40:41.427 }, 00:40:41.427 { 00:40:41.427 "subsystem": "vmd", 00:40:41.427 "config": [] 00:40:41.427 }, 00:40:41.427 { 00:40:41.427 "subsystem": "accel", 00:40:41.427 "config": [ 00:40:41.428 { 00:40:41.428 "method": "accel_set_options", 00:40:41.428 "params": { 00:40:41.428 "small_cache_size": 128, 00:40:41.428 "large_cache_size": 16, 00:40:41.428 "task_count": 2048, 00:40:41.428 "sequence_count": 2048, 00:40:41.428 "buf_count": 2048 00:40:41.428 } 00:40:41.428 } 00:40:41.428 ] 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "subsystem": "bdev", 00:40:41.428 "config": [ 00:40:41.428 { 00:40:41.428 "method": "bdev_set_options", 00:40:41.428 "params": { 00:40:41.428 "bdev_io_pool_size": 65535, 00:40:41.428 "bdev_io_cache_size": 256, 00:40:41.428 "bdev_auto_examine": true, 00:40:41.428 "iobuf_small_cache_size": 128, 00:40:41.428 "iobuf_large_cache_size": 16 00:40:41.428 } 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "method": "bdev_raid_set_options", 00:40:41.428 "params": { 00:40:41.428 "process_window_size_kb": 1024, 00:40:41.428 "process_max_bandwidth_mb_sec": 0 00:40:41.428 } 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "method": "bdev_iscsi_set_options", 00:40:41.428 "params": { 00:40:41.428 "timeout_sec": 30 00:40:41.428 } 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "method": "bdev_nvme_set_options", 00:40:41.428 "params": { 00:40:41.428 "action_on_timeout": "none", 00:40:41.428 "timeout_us": 0, 00:40:41.428 "timeout_admin_us": 0, 00:40:41.428 "keep_alive_timeout_ms": 10000, 00:40:41.428 "arbitration_burst": 0, 00:40:41.428 "low_priority_weight": 0, 00:40:41.428 "medium_priority_weight": 0, 00:40:41.428 "high_priority_weight": 0, 00:40:41.428 "nvme_adminq_poll_period_us": 10000, 00:40:41.428 "nvme_ioq_poll_period_us": 0, 00:40:41.428 "io_queue_requests": 0, 00:40:41.428 "delay_cmd_submit": true, 00:40:41.428 "transport_retry_count": 4, 00:40:41.428 "bdev_retry_count": 3, 00:40:41.428 "transport_ack_timeout": 0, 00:40:41.428 "ctrlr_loss_timeout_sec": 0, 00:40:41.428 "reconnect_delay_sec": 0, 00:40:41.428 "fast_io_fail_timeout_sec": 0, 00:40:41.428 "disable_auto_failback": false, 00:40:41.428 "generate_uuids": false, 00:40:41.428 "transport_tos": 0, 00:40:41.428 "nvme_error_stat": false, 00:40:41.428 "rdma_srq_size": 0, 00:40:41.428 "io_path_stat": false, 
00:40:41.428 "allow_accel_sequence": false, 00:40:41.428 "rdma_max_cq_size": 0, 00:40:41.428 "rdma_cm_event_timeout_ms": 0, 00:40:41.428 "dhchap_digests": [ 00:40:41.428 "sha256", 00:40:41.428 "sha384", 00:40:41.428 "sha512" 00:40:41.428 ], 00:40:41.428 "dhchap_dhgroups": [ 00:40:41.428 "null", 00:40:41.428 "ffdhe2048", 00:40:41.428 "ffdhe3072", 00:40:41.428 "ffdhe4096", 00:40:41.428 "ffdhe6144", 00:40:41.428 "ffdhe8192" 00:40:41.428 ] 00:40:41.428 } 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "method": "bdev_nvme_set_hotplug", 00:40:41.428 "params": { 00:40:41.428 "period_us": 100000, 00:40:41.428 "enable": false 00:40:41.428 } 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "method": "bdev_wait_for_examine" 00:40:41.428 } 00:40:41.428 ] 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "subsystem": "scsi", 00:40:41.428 "config": null 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "subsystem": "scheduler", 00:40:41.428 "config": [ 00:40:41.428 { 00:40:41.428 "method": "framework_set_scheduler", 00:40:41.428 "params": { 00:40:41.428 "name": "static" 00:40:41.428 } 00:40:41.428 } 00:40:41.428 ] 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "subsystem": "vhost_scsi", 00:40:41.428 "config": [] 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "subsystem": "vhost_blk", 00:40:41.428 "config": [] 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "subsystem": "ublk", 00:40:41.428 "config": [] 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "subsystem": "nbd", 00:40:41.428 "config": [] 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "subsystem": "nvmf", 00:40:41.428 "config": [ 00:40:41.428 { 00:40:41.428 "method": "nvmf_set_config", 00:40:41.428 "params": { 00:40:41.428 "discovery_filter": "match_any", 00:40:41.428 "admin_cmd_passthru": { 00:40:41.428 "identify_ctrlr": false 00:40:41.428 }, 00:40:41.428 "dhchap_digests": [ 00:40:41.428 "sha256", 00:40:41.428 "sha384", 00:40:41.428 "sha512" 00:40:41.428 ], 00:40:41.428 "dhchap_dhgroups": [ 00:40:41.428 "null", 00:40:41.428 "ffdhe2048", 00:40:41.428 "ffdhe3072", 00:40:41.428 "ffdhe4096", 00:40:41.428 "ffdhe6144", 00:40:41.428 "ffdhe8192" 00:40:41.428 ] 00:40:41.428 } 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "method": "nvmf_set_max_subsystems", 00:40:41.428 "params": { 00:40:41.428 "max_subsystems": 1024 00:40:41.428 } 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "method": "nvmf_set_crdt", 00:40:41.428 "params": { 00:40:41.428 "crdt1": 0, 00:40:41.428 "crdt2": 0, 00:40:41.428 "crdt3": 0 00:40:41.428 } 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "method": "nvmf_create_transport", 00:40:41.428 "params": { 00:40:41.428 "trtype": "TCP", 00:40:41.428 "max_queue_depth": 128, 00:40:41.428 "max_io_qpairs_per_ctrlr": 127, 00:40:41.428 "in_capsule_data_size": 4096, 00:40:41.428 "max_io_size": 131072, 00:40:41.428 "io_unit_size": 131072, 00:40:41.428 "max_aq_depth": 128, 00:40:41.428 "num_shared_buffers": 511, 00:40:41.428 "buf_cache_size": 4294967295, 00:40:41.428 "dif_insert_or_strip": false, 00:40:41.428 "zcopy": false, 00:40:41.428 "c2h_success": true, 00:40:41.428 "sock_priority": 0, 00:40:41.428 "abort_timeout_sec": 1, 00:40:41.428 "ack_timeout": 0, 00:40:41.428 "data_wr_pool_size": 0 00:40:41.428 } 00:40:41.428 } 00:40:41.428 ] 00:40:41.428 }, 00:40:41.428 { 00:40:41.428 "subsystem": "iscsi", 00:40:41.428 "config": [ 00:40:41.428 { 00:40:41.428 "method": "iscsi_set_options", 00:40:41.428 "params": { 00:40:41.428 "node_base": "iqn.2016-06.io.spdk", 00:40:41.428 "max_sessions": 128, 00:40:41.428 "max_connections_per_session": 2, 00:40:41.428 "max_queue_depth": 64, 00:40:41.428 
"default_time2wait": 2, 00:40:41.428 "default_time2retain": 20, 00:40:41.428 "first_burst_length": 8192, 00:40:41.428 "immediate_data": true, 00:40:41.428 "allow_duplicated_isid": false, 00:40:41.428 "error_recovery_level": 0, 00:40:41.428 "nop_timeout": 60, 00:40:41.428 "nop_in_interval": 30, 00:40:41.428 "disable_chap": false, 00:40:41.428 "require_chap": false, 00:40:41.428 "mutual_chap": false, 00:40:41.428 "chap_group": 0, 00:40:41.428 "max_large_datain_per_connection": 64, 00:40:41.428 "max_r2t_per_connection": 4, 00:40:41.428 "pdu_pool_size": 36864, 00:40:41.428 "immediate_data_pool_size": 16384, 00:40:41.428 "data_out_pool_size": 2048 00:40:41.428 } 00:40:41.428 } 00:40:41.428 ] 00:40:41.428 } 00:40:41.428 ] 00:40:41.428 } 00:40:41.428 09:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:40:41.428 09:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58535 00:40:41.428 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58535 ']' 00:40:41.428 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58535 00:40:41.428 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:40:41.428 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:41.428 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58535 00:40:41.428 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:41.428 killing process with pid 58535 00:40:41.428 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:41.428 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58535' 00:40:41.428 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58535 00:40:41.428 09:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58535 00:40:44.075 09:51:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58585 00:40:44.075 09:51:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:40:44.075 09:51:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:40:49.342 09:51:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58585 00:40:49.342 09:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58585 ']' 00:40:49.342 09:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58585 00:40:49.342 09:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:40:49.342 09:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:49.342 09:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58585 00:40:49.342 09:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:49.342 09:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:49.342 killing process with pid 58585 00:40:49.342 09:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58585' 00:40:49.342 09:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58585 00:40:49.342 09:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58585 00:40:51.287 09:51:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:40:51.287 09:51:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:40:51.287 00:40:51.287 real 0m11.126s 00:40:51.287 user 0m10.474s 00:40:51.287 sys 0m1.022s 00:40:51.287 09:51:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:51.287 09:51:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:40:51.287 ************************************ 00:40:51.287 END TEST skip_rpc_with_json 00:40:51.287 ************************************ 00:40:51.287 09:51:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:40:51.287 09:51:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:51.287 09:51:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:51.287 09:51:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:51.287 ************************************ 00:40:51.287 START TEST skip_rpc_with_delay 00:40:51.287 ************************************ 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:40:51.287 [2024-12-09 09:51:58.165940] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
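The app.c *ERROR* just above is the pass condition for skip_rpc_with_delay: --wait-for-rpc asks the target to pause startup until an RPC arrives, which cannot work when --no-rpc-server disables the listener, so spdk_tgt must refuse to start (the es=1 handling below treats that non-zero exit as success). A minimal sketch of the assertion, reusing the exact command line from this run:

  # Sketch: launching with both flags must fail, and the failure is the expected result.
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "FAIL: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
      exit 1
  fi
  echo "OK: launch rejected as expected"
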
00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:51.287 00:40:51.287 real 0m0.199s 00:40:51.287 user 0m0.107s 00:40:51.287 sys 0m0.089s 00:40:51.287 ************************************ 00:40:51.287 END TEST skip_rpc_with_delay 00:40:51.287 ************************************ 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:51.287 09:51:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:40:51.287 09:51:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:40:51.287 09:51:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:40:51.287 09:51:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:40:51.287 09:51:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:51.287 09:51:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:51.287 09:51:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:51.287 ************************************ 00:40:51.287 START TEST exit_on_failed_rpc_init 00:40:51.287 ************************************ 00:40:51.287 09:51:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:40:51.287 09:51:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58719 00:40:51.288 09:51:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58719 00:40:51.288 09:51:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58719 ']' 00:40:51.288 09:51:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:51.288 09:51:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:40:51.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:51.288 09:51:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:51.288 09:51:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:51.288 09:51:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:51.288 09:51:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:40:51.556 [2024-12-09 09:51:58.410004] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:40:51.556 [2024-12-09 09:51:58.410188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58719 ] 00:40:51.814 [2024-12-09 09:51:58.601463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:51.814 [2024-12-09 09:51:58.771717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:40:52.751 09:51:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:40:52.751 [2024-12-09 09:51:59.784241] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:40:52.751 [2024-12-09 09:51:59.784446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58737 ] 00:40:53.009 [2024-12-09 09:51:59.979185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:53.267 [2024-12-09 09:52:00.135928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:53.267 [2024-12-09 09:52:00.136113] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
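The "socket in use" *ERROR* just above (and the rpc.c/app.c lines that follow) is likewise the expected outcome for exit_on_failed_rpc_init: both spdk_tgt instances default to /var/tmp/spdk.sock, so the second one (-m 0x2) cannot bind it and exits non-zero. Roughly, under the same defaults as this run (the real test waits for the socket with waitforlisten rather than sleeping):

  # Sketch: two targets cannot share the default RPC socket /var/tmp/spdk.sock.
  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &             # first instance claims /var/tmp/spdk.sock
  pid=$!
  sleep 1                          # crude stand-in for waitforlisten
  if "$SPDK_TGT" -m 0x2; then      # second instance must fail RPC init
      echo "FAIL: second target started despite the socket conflict" >&2
  fi
  kill "$pid"
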
00:40:53.268 [2024-12-09 09:52:00.136141] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:40:53.268 [2024-12-09 09:52:00.136168] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58719 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58719 ']' 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58719 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58719 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:53.526 killing process with pid 58719 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58719' 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58719 00:40:53.526 09:52:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58719 00:40:56.055 00:40:56.055 real 0m4.422s 00:40:56.055 user 0m4.860s 00:40:56.055 sys 0m0.683s 00:40:56.055 ************************************ 00:40:56.055 END TEST exit_on_failed_rpc_init 00:40:56.055 ************************************ 00:40:56.055 09:52:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:56.055 09:52:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:40:56.055 09:52:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:40:56.055 ************************************ 00:40:56.055 END TEST skip_rpc 00:40:56.055 ************************************ 00:40:56.055 00:40:56.055 real 0m23.326s 00:40:56.055 user 0m22.199s 00:40:56.055 sys 0m2.506s 00:40:56.055 09:52:02 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:56.055 09:52:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:56.055 09:52:02 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:40:56.055 09:52:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:56.055 09:52:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:56.055 09:52:02 -- common/autotest_common.sh@10 -- # set +x 00:40:56.055 
************************************ 00:40:56.055 START TEST rpc_client 00:40:56.055 ************************************ 00:40:56.055 09:52:02 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:40:56.055 * Looking for test storage... 00:40:56.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:40:56.055 09:52:02 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:56.055 09:52:02 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:40:56.055 09:52:02 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:56.055 09:52:02 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:56.055 09:52:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:56.055 09:52:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:56.055 09:52:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:56.055 09:52:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:40:56.055 09:52:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:56.056 09:52:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:40:56.056 09:52:02 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:56.056 09:52:02 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:56.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:56.056 --rc genhtml_branch_coverage=1 00:40:56.056 --rc genhtml_function_coverage=1 00:40:56.056 --rc genhtml_legend=1 00:40:56.056 --rc geninfo_all_blocks=1 00:40:56.056 --rc geninfo_unexecuted_blocks=1 00:40:56.056 00:40:56.056 ' 00:40:56.056 09:52:02 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:56.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:56.056 --rc genhtml_branch_coverage=1 00:40:56.056 --rc genhtml_function_coverage=1 00:40:56.056 --rc genhtml_legend=1 00:40:56.056 --rc geninfo_all_blocks=1 00:40:56.056 --rc geninfo_unexecuted_blocks=1 00:40:56.056 00:40:56.056 ' 00:40:56.056 09:52:02 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:56.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:56.056 --rc genhtml_branch_coverage=1 00:40:56.056 --rc genhtml_function_coverage=1 00:40:56.056 --rc genhtml_legend=1 00:40:56.056 --rc geninfo_all_blocks=1 00:40:56.056 --rc geninfo_unexecuted_blocks=1 00:40:56.056 00:40:56.056 ' 00:40:56.056 09:52:02 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:56.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:56.056 --rc genhtml_branch_coverage=1 00:40:56.056 --rc genhtml_function_coverage=1 00:40:56.056 --rc genhtml_legend=1 00:40:56.056 --rc geninfo_all_blocks=1 00:40:56.056 --rc geninfo_unexecuted_blocks=1 00:40:56.056 00:40:56.056 ' 00:40:56.056 09:52:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:40:56.056 OK 00:40:56.056 09:52:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:40:56.056 00:40:56.056 real 0m0.259s 00:40:56.056 user 0m0.156s 00:40:56.056 sys 0m0.114s 00:40:56.056 09:52:03 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:56.056 ************************************ 00:40:56.056 END TEST rpc_client 00:40:56.056 ************************************ 00:40:56.056 09:52:03 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:40:56.315 09:52:03 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:40:56.315 09:52:03 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:56.315 09:52:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:56.315 09:52:03 -- common/autotest_common.sh@10 -- # set +x 00:40:56.315 ************************************ 00:40:56.315 START TEST json_config 00:40:56.315 ************************************ 00:40:56.315 09:52:03 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:40:56.315 09:52:03 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:56.315 09:52:03 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:40:56.315 09:52:03 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:56.315 09:52:03 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:56.315 09:52:03 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:56.315 09:52:03 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:56.315 09:52:03 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:56.315 09:52:03 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:40:56.315 09:52:03 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:40:56.315 09:52:03 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:40:56.315 09:52:03 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:40:56.315 09:52:03 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:40:56.315 09:52:03 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:40:56.315 09:52:03 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:40:56.315 09:52:03 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:56.315 09:52:03 json_config -- scripts/common.sh@344 -- # case "$op" in 00:40:56.315 09:52:03 json_config -- scripts/common.sh@345 -- # : 1 00:40:56.315 09:52:03 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:56.315 09:52:03 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:56.315 09:52:03 json_config -- scripts/common.sh@365 -- # decimal 1 00:40:56.315 09:52:03 json_config -- scripts/common.sh@353 -- # local d=1 00:40:56.315 09:52:03 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:56.315 09:52:03 json_config -- scripts/common.sh@355 -- # echo 1 00:40:56.315 09:52:03 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:40:56.315 09:52:03 json_config -- scripts/common.sh@366 -- # decimal 2 00:40:56.315 09:52:03 json_config -- scripts/common.sh@353 -- # local d=2 00:40:56.315 09:52:03 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:56.315 09:52:03 json_config -- scripts/common.sh@355 -- # echo 2 00:40:56.315 09:52:03 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:40:56.315 09:52:03 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:56.315 09:52:03 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:56.315 09:52:03 json_config -- scripts/common.sh@368 -- # return 0 00:40:56.315 09:52:03 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:56.315 09:52:03 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:56.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:56.315 --rc genhtml_branch_coverage=1 00:40:56.315 --rc genhtml_function_coverage=1 00:40:56.315 --rc genhtml_legend=1 00:40:56.315 --rc geninfo_all_blocks=1 00:40:56.315 --rc geninfo_unexecuted_blocks=1 00:40:56.315 00:40:56.315 ' 00:40:56.315 09:52:03 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:56.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:56.315 --rc genhtml_branch_coverage=1 00:40:56.315 --rc genhtml_function_coverage=1 00:40:56.315 --rc genhtml_legend=1 00:40:56.315 --rc geninfo_all_blocks=1 00:40:56.315 --rc geninfo_unexecuted_blocks=1 00:40:56.315 00:40:56.315 ' 00:40:56.315 09:52:03 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:56.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:56.315 --rc genhtml_branch_coverage=1 00:40:56.315 --rc genhtml_function_coverage=1 00:40:56.315 --rc genhtml_legend=1 00:40:56.315 --rc geninfo_all_blocks=1 00:40:56.315 --rc geninfo_unexecuted_blocks=1 00:40:56.315 00:40:56.315 ' 00:40:56.315 09:52:03 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:56.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:56.315 --rc genhtml_branch_coverage=1 00:40:56.315 --rc genhtml_function_coverage=1 00:40:56.315 --rc genhtml_legend=1 00:40:56.315 --rc geninfo_all_blocks=1 00:40:56.315 --rc geninfo_unexecuted_blocks=1 00:40:56.315 00:40:56.315 ' 00:40:56.315 09:52:03 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@7 -- # uname -s 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:56.315 09:52:03 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1a580454-7778-4a91-9746-e8f5310fed33 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1a580454-7778-4a91-9746-e8f5310fed33 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:56.315 09:52:03 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:40:56.315 09:52:03 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:56.315 09:52:03 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:56.315 09:52:03 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:56.315 09:52:03 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:56.315 09:52:03 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:56.315 09:52:03 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:56.315 09:52:03 json_config -- paths/export.sh@5 -- # export PATH 00:40:56.315 09:52:03 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@51 -- # : 0 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:56.315 09:52:03 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:56.315 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:56.315 09:52:03 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:56.315 09:52:03 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:40:56.315 09:52:03 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:40:56.316 09:52:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:40:56.316 09:52:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:40:56.316 09:52:03 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:40:56.316 WARNING: No tests are enabled so not running JSON configuration tests 00:40:56.316 09:52:03 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:40:56.316 09:52:03 json_config -- json_config/json_config.sh@28 -- # exit 0 00:40:56.316 00:40:56.316 real 0m0.184s 00:40:56.316 user 0m0.119s 00:40:56.316 sys 0m0.071s 00:40:56.316 09:52:03 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:56.316 ************************************ 00:40:56.316 09:52:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:40:56.316 END TEST json_config 00:40:56.316 ************************************ 00:40:56.316 09:52:03 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:40:56.316 09:52:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:56.316 09:52:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:56.316 09:52:03 -- common/autotest_common.sh@10 -- # set +x 00:40:56.316 ************************************ 00:40:56.316 START TEST json_config_extra_key 00:40:56.316 ************************************ 00:40:56.316 09:52:03 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:40:56.575 09:52:03 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:56.575 09:52:03 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:40:56.575 09:52:03 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:56.575 09:52:03 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:40:56.575 09:52:03 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:56.575 09:52:03 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:40:56.575 09:52:03 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:56.575 09:52:03 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:56.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:56.575 --rc genhtml_branch_coverage=1 00:40:56.575 --rc genhtml_function_coverage=1 00:40:56.575 --rc genhtml_legend=1 00:40:56.575 --rc geninfo_all_blocks=1 00:40:56.575 --rc geninfo_unexecuted_blocks=1 00:40:56.575 00:40:56.575 ' 00:40:56.575 09:52:03 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:56.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:56.575 --rc genhtml_branch_coverage=1 00:40:56.575 --rc genhtml_function_coverage=1 00:40:56.575 --rc genhtml_legend=1 00:40:56.575 --rc geninfo_all_blocks=1 00:40:56.575 --rc geninfo_unexecuted_blocks=1 00:40:56.575 00:40:56.575 ' 00:40:56.575 09:52:03 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:56.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:56.575 --rc genhtml_branch_coverage=1 00:40:56.575 --rc genhtml_function_coverage=1 00:40:56.575 --rc genhtml_legend=1 00:40:56.575 --rc geninfo_all_blocks=1 00:40:56.575 --rc geninfo_unexecuted_blocks=1 00:40:56.575 00:40:56.575 ' 00:40:56.575 09:52:03 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:56.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:56.575 --rc genhtml_branch_coverage=1 00:40:56.575 --rc 
genhtml_function_coverage=1 00:40:56.575 --rc genhtml_legend=1 00:40:56.576 --rc geninfo_all_blocks=1 00:40:56.576 --rc geninfo_unexecuted_blocks=1 00:40:56.576 00:40:56.576 ' 00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1a580454-7778-4a91-9746-e8f5310fed33 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1a580454-7778-4a91-9746-e8f5310fed33 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:56.576 09:52:03 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:40:56.576 09:52:03 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:56.576 09:52:03 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:56.576 09:52:03 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:56.576 09:52:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:56.576 09:52:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:56.576 09:52:03 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:56.576 09:52:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:40:56.576 09:52:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:56.576 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:56.576 09:52:03 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:40:56.576 INFO: launching applications... 00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
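The "[: : integer expression expected" complaints from nvmf/common.sh line 33, seen in both suites above, are bash's test builtin rejecting an empty string where a number is required: the gating variable expands to nothing, so the check degenerates to '[' '' -eq 1 ']'. Which variable is unset is not visible in the trace, so the reproduction below uses a placeholder name. A minimal sketch of the failure and the usual defensive rewrite:

  # FLAG stands in for whatever gating variable is empty at nvmf/common.sh:33.
  FLAG=''
  [ "$FLAG" -eq 1 ] && echo enabled      # bash: [: : integer expression expected
  # Defaulting the expansion keeps the comparison numeric and quietly false:
  [ "${FLAG:-0}" -eq 1 ] && echo enabled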
00:40:56.576 09:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:40:56.576 09:52:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:40:56.576 09:52:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:40:56.576 09:52:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:40:56.576 09:52:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:40:56.576 09:52:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:40:56.576 09:52:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:40:56.576 09:52:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:40:56.576 09:52:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58947 00:40:56.576 09:52:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:40:56.576 Waiting for target to run... 00:40:56.576 09:52:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58947 /var/tmp/spdk_tgt.sock 00:40:56.576 09:52:03 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:40:56.576 09:52:03 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58947 ']' 00:40:56.576 09:52:03 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:40:56.576 09:52:03 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:56.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:40:56.576 09:52:03 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:40:56.576 09:52:03 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:56.576 09:52:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:40:56.834 [2024-12-09 09:52:03.664477] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:40:56.834 [2024-12-09 09:52:03.664689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58947 ] 00:40:57.400 [2024-12-09 09:52:04.160794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:57.400 [2024-12-09 09:52:04.304886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:57.966 09:52:04 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:57.966 00:40:57.966 09:52:04 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:40:57.966 09:52:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:40:57.966 INFO: shutting down applications... 00:40:57.966 09:52:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
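The waitforlisten step above blocks until the freshly launched spdk_tgt (pid 58947) is ready on /var/tmp/spdk_tgt.sock, with max_retries=100 per the trace. A rough approximation of that helper, assuming only the behavior visible in the log rather than the verbatim common.sh implementation:

  # Poll until the target process is alive and its RPC socket exists.
  waitforlisten_sketch() {
      local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
      for ((i = 0; i < 100; i++)); do              # max_retries=100, as in the trace
          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
          [ -S "$sock" ] && return 0               # socket is up; caller may RPC now
          sleep 0.1
      done
      return 1                                     # timed out waiting for the socket
  }

The real helper very likely also issues a probe RPC against the socket before declaring success; this sketch stops at socket existence.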
00:40:57.966 09:52:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:40:57.966 09:52:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:40:57.966 09:52:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:40:57.966 09:52:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58947 ]] 00:40:57.966 09:52:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58947 00:40:57.966 09:52:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:40:57.966 09:52:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:40:57.966 09:52:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58947 00:40:57.966 09:52:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:40:58.530 09:52:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:40:58.530 09:52:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:40:58.530 09:52:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58947 00:40:58.530 09:52:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:40:59.096 09:52:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:40:59.096 09:52:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:40:59.096 09:52:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58947 00:40:59.096 09:52:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:40:59.662 09:52:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:40:59.662 09:52:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:40:59.662 09:52:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58947 00:40:59.663 09:52:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:41:00.235 09:52:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:41:00.235 09:52:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:41:00.235 09:52:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58947 00:41:00.235 09:52:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:41:00.493 09:52:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:41:00.493 09:52:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:41:00.493 09:52:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58947 00:41:00.493 09:52:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:41:01.060 09:52:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:41:01.060 09:52:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:41:01.060 09:52:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58947 00:41:01.060 09:52:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:41:01.060 09:52:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:41:01.060 09:52:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:41:01.060 SPDK target shutdown done 00:41:01.060 09:52:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:41:01.060 Success 00:41:01.060 09:52:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:41:01.060 00:41:01.060 real 0m4.685s 00:41:01.060 user 0m4.070s 00:41:01.060 sys 0m0.694s 00:41:01.060 
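The shutdown sequence traced above is a bounded poll: SIGINT the target (pid 58947), then probe it with kill -0 up to 30 times at 0.5 s intervals, roughly 15 s, before break declares the shutdown done. Condensed into one function (the name is mine; the logic is exactly what the trace shows):

  shutdown_app_sketch() {
      local pid=$1 i
      kill -SIGINT "$pid"
      for ((i = 0; i < 30; i++)); do
          kill -0 "$pid" 2>/dev/null || return 0   # signal 0 only probes existence
          sleep 0.5
      done
      return 1                                     # still alive after ~15 s
  }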
09:52:08 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:01.060 09:52:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:41:01.060 ************************************ 00:41:01.060 END TEST json_config_extra_key 00:41:01.060 ************************************ 00:41:01.060 09:52:08 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:41:01.060 09:52:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:01.060 09:52:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:01.060 09:52:08 -- common/autotest_common.sh@10 -- # set +x 00:41:01.060 ************************************ 00:41:01.060 START TEST alias_rpc 00:41:01.060 ************************************ 00:41:01.060 09:52:08 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:41:01.319 * Looking for test storage... 00:41:01.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@345 -- # : 1 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:01.319 09:52:08 alias_rpc -- scripts/common.sh@368 -- # return 0 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:01.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.319 --rc genhtml_branch_coverage=1 00:41:01.319 --rc genhtml_function_coverage=1 00:41:01.319 --rc genhtml_legend=1 00:41:01.319 --rc geninfo_all_blocks=1 00:41:01.319 --rc geninfo_unexecuted_blocks=1 00:41:01.319 00:41:01.319 ' 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:01.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.319 --rc genhtml_branch_coverage=1 00:41:01.319 --rc genhtml_function_coverage=1 00:41:01.319 --rc genhtml_legend=1 00:41:01.319 --rc geninfo_all_blocks=1 00:41:01.319 --rc geninfo_unexecuted_blocks=1 00:41:01.319 00:41:01.319 ' 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:01.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.319 --rc genhtml_branch_coverage=1 00:41:01.319 --rc genhtml_function_coverage=1 00:41:01.319 --rc genhtml_legend=1 00:41:01.319 --rc geninfo_all_blocks=1 00:41:01.319 --rc geninfo_unexecuted_blocks=1 00:41:01.319 00:41:01.319 ' 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:01.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.319 --rc genhtml_branch_coverage=1 00:41:01.319 --rc genhtml_function_coverage=1 00:41:01.319 --rc genhtml_legend=1 00:41:01.319 --rc geninfo_all_blocks=1 00:41:01.319 --rc geninfo_unexecuted_blocks=1 00:41:01.319 00:41:01.319 ' 00:41:01.319 09:52:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:41:01.319 09:52:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59059 00:41:01.319 09:52:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:01.319 09:52:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59059 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59059 ']' 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:01.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
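The lt 1.15 2 trace repeated at the top of each suite is scripts/common.sh comparing the installed lcov version against 2 to pick coverage flags: both versions are split on '.', '-' and ':' (IFS=.-:) into arrays and compared component-wise as integers, with the loop bound (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) padding the shorter one. A simplified sketch of that logic (the real cmp_versions also regex-checks each component; purely numeric parts are assumed here):

  version_lt() {
      local IFS=.-: v a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # missing components count as 0
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1                                        # equal is not less-than
  }
  version_lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"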
00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:01.319 09:52:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:01.577 [2024-12-09 09:52:08.419177] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:41:01.577 [2024-12-09 09:52:08.419406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59059 ] 00:41:01.577 [2024-12-09 09:52:08.613467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:01.836 [2024-12-09 09:52:08.768486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:02.771 09:52:09 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:02.771 09:52:09 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:41:02.771 09:52:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:41:03.029 09:52:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59059 00:41:03.029 09:52:09 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59059 ']' 00:41:03.029 09:52:09 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59059 00:41:03.029 09:52:09 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:41:03.029 09:52:09 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:03.029 09:52:09 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59059 00:41:03.029 09:52:10 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:03.029 09:52:10 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:03.029 killing process with pid 59059 00:41:03.029 09:52:10 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59059' 00:41:03.029 09:52:10 alias_rpc -- common/autotest_common.sh@973 -- # kill 59059 00:41:03.029 09:52:10 alias_rpc -- common/autotest_common.sh@978 -- # wait 59059 00:41:05.559 00:41:05.559 real 0m4.239s 00:41:05.559 user 0m4.393s 00:41:05.559 sys 0m0.631s 00:41:05.559 09:52:12 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:05.559 09:52:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:05.559 ************************************ 00:41:05.559 END TEST alias_rpc 00:41:05.559 ************************************ 00:41:05.559 09:52:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:41:05.559 09:52:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:41:05.559 09:52:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:05.559 09:52:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:05.559 09:52:12 -- common/autotest_common.sh@10 -- # set +x 00:41:05.559 ************************************ 00:41:05.559 START TEST spdkcli_tcp 00:41:05.559 ************************************ 00:41:05.559 09:52:12 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:41:05.559 * Looking for test storage... 
00:41:05.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:41:05.559 09:52:12 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:05.559 09:52:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:05.559 09:52:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:05.559 09:52:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:05.559 09:52:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:05.560 09:52:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:41:05.560 09:52:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:05.560 09:52:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:41:05.560 09:52:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:05.560 09:52:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:05.560 09:52:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:05.560 09:52:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:05.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.560 --rc genhtml_branch_coverage=1 00:41:05.560 --rc genhtml_function_coverage=1 00:41:05.560 --rc genhtml_legend=1 00:41:05.560 --rc geninfo_all_blocks=1 00:41:05.560 --rc geninfo_unexecuted_blocks=1 00:41:05.560 00:41:05.560 ' 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:05.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.560 --rc genhtml_branch_coverage=1 00:41:05.560 --rc genhtml_function_coverage=1 00:41:05.560 --rc genhtml_legend=1 00:41:05.560 --rc geninfo_all_blocks=1 00:41:05.560 --rc geninfo_unexecuted_blocks=1 00:41:05.560 
00:41:05.560 ' 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:05.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.560 --rc genhtml_branch_coverage=1 00:41:05.560 --rc genhtml_function_coverage=1 00:41:05.560 --rc genhtml_legend=1 00:41:05.560 --rc geninfo_all_blocks=1 00:41:05.560 --rc geninfo_unexecuted_blocks=1 00:41:05.560 00:41:05.560 ' 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:05.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.560 --rc genhtml_branch_coverage=1 00:41:05.560 --rc genhtml_function_coverage=1 00:41:05.560 --rc genhtml_legend=1 00:41:05.560 --rc geninfo_all_blocks=1 00:41:05.560 --rc geninfo_unexecuted_blocks=1 00:41:05.560 00:41:05.560 ' 00:41:05.560 09:52:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:41:05.560 09:52:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:41:05.560 09:52:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:41:05.560 09:52:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:41:05.560 09:52:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:41:05.560 09:52:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:41:05.560 09:52:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:05.560 09:52:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59171 00:41:05.560 09:52:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:41:05.560 09:52:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59171 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59171 ']' 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:05.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:05.560 09:52:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:05.818 [2024-12-09 09:52:12.682421] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
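The tcp.sh run that follows exercises the JSON-RPC server over TCP rather than the usual UNIX socket: socat listens on 127.0.0.1:9998 and forwards to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP side. Stripped of the harness, the same setup is (commands as they appear in the trace, with repo-relative paths and cleanup added):

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # bridge TCP -> RPC socket
  socat_pid=$!
  # -r 100 retries, -t 2 s timeout, as in the trace; lists every registered RPC method
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"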
00:41:05.818 [2024-12-09 09:52:12.682631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59171 ] 00:41:06.076 [2024-12-09 09:52:12.873730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:06.076 [2024-12-09 09:52:13.045210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:06.076 [2024-12-09 09:52:13.045226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:07.010 09:52:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:07.010 09:52:13 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:07.010 09:52:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59188 00:41:07.010 09:52:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:41:07.010 09:52:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:41:07.269 [ 00:41:07.269 "bdev_malloc_delete", 00:41:07.269 "bdev_malloc_create", 00:41:07.269 "bdev_null_resize", 00:41:07.269 "bdev_null_delete", 00:41:07.269 "bdev_null_create", 00:41:07.269 "bdev_nvme_cuse_unregister", 00:41:07.269 "bdev_nvme_cuse_register", 00:41:07.269 "bdev_opal_new_user", 00:41:07.269 "bdev_opal_set_lock_state", 00:41:07.269 "bdev_opal_delete", 00:41:07.269 "bdev_opal_get_info", 00:41:07.269 "bdev_opal_create", 00:41:07.269 "bdev_nvme_opal_revert", 00:41:07.269 "bdev_nvme_opal_init", 00:41:07.269 "bdev_nvme_send_cmd", 00:41:07.269 "bdev_nvme_set_keys", 00:41:07.269 "bdev_nvme_get_path_iostat", 00:41:07.269 "bdev_nvme_get_mdns_discovery_info", 00:41:07.269 "bdev_nvme_stop_mdns_discovery", 00:41:07.269 "bdev_nvme_start_mdns_discovery", 00:41:07.269 "bdev_nvme_set_multipath_policy", 00:41:07.269 "bdev_nvme_set_preferred_path", 00:41:07.269 "bdev_nvme_get_io_paths", 00:41:07.269 "bdev_nvme_remove_error_injection", 00:41:07.269 "bdev_nvme_add_error_injection", 00:41:07.269 "bdev_nvme_get_discovery_info", 00:41:07.269 "bdev_nvme_stop_discovery", 00:41:07.269 "bdev_nvme_start_discovery", 00:41:07.269 "bdev_nvme_get_controller_health_info", 00:41:07.269 "bdev_nvme_disable_controller", 00:41:07.269 "bdev_nvme_enable_controller", 00:41:07.269 "bdev_nvme_reset_controller", 00:41:07.269 "bdev_nvme_get_transport_statistics", 00:41:07.269 "bdev_nvme_apply_firmware", 00:41:07.269 "bdev_nvme_detach_controller", 00:41:07.269 "bdev_nvme_get_controllers", 00:41:07.269 "bdev_nvme_attach_controller", 00:41:07.269 "bdev_nvme_set_hotplug", 00:41:07.269 "bdev_nvme_set_options", 00:41:07.269 "bdev_passthru_delete", 00:41:07.269 "bdev_passthru_create", 00:41:07.269 "bdev_lvol_set_parent_bdev", 00:41:07.269 "bdev_lvol_set_parent", 00:41:07.269 "bdev_lvol_check_shallow_copy", 00:41:07.269 "bdev_lvol_start_shallow_copy", 00:41:07.269 "bdev_lvol_grow_lvstore", 00:41:07.269 "bdev_lvol_get_lvols", 00:41:07.269 "bdev_lvol_get_lvstores", 00:41:07.269 "bdev_lvol_delete", 00:41:07.269 "bdev_lvol_set_read_only", 00:41:07.269 "bdev_lvol_resize", 00:41:07.269 "bdev_lvol_decouple_parent", 00:41:07.269 "bdev_lvol_inflate", 00:41:07.269 "bdev_lvol_rename", 00:41:07.269 "bdev_lvol_clone_bdev", 00:41:07.269 "bdev_lvol_clone", 00:41:07.269 "bdev_lvol_snapshot", 00:41:07.269 "bdev_lvol_create", 00:41:07.269 "bdev_lvol_delete_lvstore", 00:41:07.269 "bdev_lvol_rename_lvstore", 00:41:07.269 
"bdev_lvol_create_lvstore", 00:41:07.269 "bdev_raid_set_options", 00:41:07.269 "bdev_raid_remove_base_bdev", 00:41:07.269 "bdev_raid_add_base_bdev", 00:41:07.269 "bdev_raid_delete", 00:41:07.269 "bdev_raid_create", 00:41:07.269 "bdev_raid_get_bdevs", 00:41:07.269 "bdev_error_inject_error", 00:41:07.269 "bdev_error_delete", 00:41:07.269 "bdev_error_create", 00:41:07.269 "bdev_split_delete", 00:41:07.269 "bdev_split_create", 00:41:07.269 "bdev_delay_delete", 00:41:07.269 "bdev_delay_create", 00:41:07.269 "bdev_delay_update_latency", 00:41:07.269 "bdev_zone_block_delete", 00:41:07.269 "bdev_zone_block_create", 00:41:07.269 "blobfs_create", 00:41:07.269 "blobfs_detect", 00:41:07.269 "blobfs_set_cache_size", 00:41:07.269 "bdev_xnvme_delete", 00:41:07.269 "bdev_xnvme_create", 00:41:07.269 "bdev_aio_delete", 00:41:07.269 "bdev_aio_rescan", 00:41:07.269 "bdev_aio_create", 00:41:07.269 "bdev_ftl_set_property", 00:41:07.269 "bdev_ftl_get_properties", 00:41:07.269 "bdev_ftl_get_stats", 00:41:07.269 "bdev_ftl_unmap", 00:41:07.269 "bdev_ftl_unload", 00:41:07.269 "bdev_ftl_delete", 00:41:07.269 "bdev_ftl_load", 00:41:07.269 "bdev_ftl_create", 00:41:07.269 "bdev_virtio_attach_controller", 00:41:07.269 "bdev_virtio_scsi_get_devices", 00:41:07.269 "bdev_virtio_detach_controller", 00:41:07.269 "bdev_virtio_blk_set_hotplug", 00:41:07.269 "bdev_iscsi_delete", 00:41:07.269 "bdev_iscsi_create", 00:41:07.269 "bdev_iscsi_set_options", 00:41:07.269 "accel_error_inject_error", 00:41:07.269 "ioat_scan_accel_module", 00:41:07.269 "dsa_scan_accel_module", 00:41:07.269 "iaa_scan_accel_module", 00:41:07.269 "keyring_file_remove_key", 00:41:07.269 "keyring_file_add_key", 00:41:07.269 "keyring_linux_set_options", 00:41:07.269 "fsdev_aio_delete", 00:41:07.269 "fsdev_aio_create", 00:41:07.269 "iscsi_get_histogram", 00:41:07.269 "iscsi_enable_histogram", 00:41:07.269 "iscsi_set_options", 00:41:07.269 "iscsi_get_auth_groups", 00:41:07.269 "iscsi_auth_group_remove_secret", 00:41:07.269 "iscsi_auth_group_add_secret", 00:41:07.269 "iscsi_delete_auth_group", 00:41:07.269 "iscsi_create_auth_group", 00:41:07.269 "iscsi_set_discovery_auth", 00:41:07.269 "iscsi_get_options", 00:41:07.269 "iscsi_target_node_request_logout", 00:41:07.269 "iscsi_target_node_set_redirect", 00:41:07.269 "iscsi_target_node_set_auth", 00:41:07.269 "iscsi_target_node_add_lun", 00:41:07.269 "iscsi_get_stats", 00:41:07.269 "iscsi_get_connections", 00:41:07.269 "iscsi_portal_group_set_auth", 00:41:07.269 "iscsi_start_portal_group", 00:41:07.269 "iscsi_delete_portal_group", 00:41:07.269 "iscsi_create_portal_group", 00:41:07.269 "iscsi_get_portal_groups", 00:41:07.269 "iscsi_delete_target_node", 00:41:07.269 "iscsi_target_node_remove_pg_ig_maps", 00:41:07.269 "iscsi_target_node_add_pg_ig_maps", 00:41:07.269 "iscsi_create_target_node", 00:41:07.270 "iscsi_get_target_nodes", 00:41:07.270 "iscsi_delete_initiator_group", 00:41:07.270 "iscsi_initiator_group_remove_initiators", 00:41:07.270 "iscsi_initiator_group_add_initiators", 00:41:07.270 "iscsi_create_initiator_group", 00:41:07.270 "iscsi_get_initiator_groups", 00:41:07.270 "nvmf_set_crdt", 00:41:07.270 "nvmf_set_config", 00:41:07.270 "nvmf_set_max_subsystems", 00:41:07.270 "nvmf_stop_mdns_prr", 00:41:07.270 "nvmf_publish_mdns_prr", 00:41:07.270 "nvmf_subsystem_get_listeners", 00:41:07.270 "nvmf_subsystem_get_qpairs", 00:41:07.270 "nvmf_subsystem_get_controllers", 00:41:07.270 "nvmf_get_stats", 00:41:07.270 "nvmf_get_transports", 00:41:07.270 "nvmf_create_transport", 00:41:07.270 "nvmf_get_targets", 00:41:07.270 
"nvmf_delete_target", 00:41:07.270 "nvmf_create_target", 00:41:07.270 "nvmf_subsystem_allow_any_host", 00:41:07.270 "nvmf_subsystem_set_keys", 00:41:07.270 "nvmf_subsystem_remove_host", 00:41:07.270 "nvmf_subsystem_add_host", 00:41:07.270 "nvmf_ns_remove_host", 00:41:07.270 "nvmf_ns_add_host", 00:41:07.270 "nvmf_subsystem_remove_ns", 00:41:07.270 "nvmf_subsystem_set_ns_ana_group", 00:41:07.270 "nvmf_subsystem_add_ns", 00:41:07.270 "nvmf_subsystem_listener_set_ana_state", 00:41:07.270 "nvmf_discovery_get_referrals", 00:41:07.270 "nvmf_discovery_remove_referral", 00:41:07.270 "nvmf_discovery_add_referral", 00:41:07.270 "nvmf_subsystem_remove_listener", 00:41:07.270 "nvmf_subsystem_add_listener", 00:41:07.270 "nvmf_delete_subsystem", 00:41:07.270 "nvmf_create_subsystem", 00:41:07.270 "nvmf_get_subsystems", 00:41:07.270 "env_dpdk_get_mem_stats", 00:41:07.270 "nbd_get_disks", 00:41:07.270 "nbd_stop_disk", 00:41:07.270 "nbd_start_disk", 00:41:07.270 "ublk_recover_disk", 00:41:07.270 "ublk_get_disks", 00:41:07.270 "ublk_stop_disk", 00:41:07.270 "ublk_start_disk", 00:41:07.270 "ublk_destroy_target", 00:41:07.270 "ublk_create_target", 00:41:07.270 "virtio_blk_create_transport", 00:41:07.270 "virtio_blk_get_transports", 00:41:07.270 "vhost_controller_set_coalescing", 00:41:07.270 "vhost_get_controllers", 00:41:07.270 "vhost_delete_controller", 00:41:07.270 "vhost_create_blk_controller", 00:41:07.270 "vhost_scsi_controller_remove_target", 00:41:07.270 "vhost_scsi_controller_add_target", 00:41:07.270 "vhost_start_scsi_controller", 00:41:07.270 "vhost_create_scsi_controller", 00:41:07.270 "thread_set_cpumask", 00:41:07.270 "scheduler_set_options", 00:41:07.270 "framework_get_governor", 00:41:07.270 "framework_get_scheduler", 00:41:07.270 "framework_set_scheduler", 00:41:07.270 "framework_get_reactors", 00:41:07.270 "thread_get_io_channels", 00:41:07.270 "thread_get_pollers", 00:41:07.270 "thread_get_stats", 00:41:07.270 "framework_monitor_context_switch", 00:41:07.270 "spdk_kill_instance", 00:41:07.270 "log_enable_timestamps", 00:41:07.270 "log_get_flags", 00:41:07.270 "log_clear_flag", 00:41:07.270 "log_set_flag", 00:41:07.270 "log_get_level", 00:41:07.270 "log_set_level", 00:41:07.270 "log_get_print_level", 00:41:07.270 "log_set_print_level", 00:41:07.270 "framework_enable_cpumask_locks", 00:41:07.270 "framework_disable_cpumask_locks", 00:41:07.270 "framework_wait_init", 00:41:07.270 "framework_start_init", 00:41:07.270 "scsi_get_devices", 00:41:07.270 "bdev_get_histogram", 00:41:07.270 "bdev_enable_histogram", 00:41:07.270 "bdev_set_qos_limit", 00:41:07.270 "bdev_set_qd_sampling_period", 00:41:07.270 "bdev_get_bdevs", 00:41:07.270 "bdev_reset_iostat", 00:41:07.270 "bdev_get_iostat", 00:41:07.270 "bdev_examine", 00:41:07.270 "bdev_wait_for_examine", 00:41:07.270 "bdev_set_options", 00:41:07.270 "accel_get_stats", 00:41:07.270 "accel_set_options", 00:41:07.270 "accel_set_driver", 00:41:07.270 "accel_crypto_key_destroy", 00:41:07.270 "accel_crypto_keys_get", 00:41:07.270 "accel_crypto_key_create", 00:41:07.270 "accel_assign_opc", 00:41:07.270 "accel_get_module_info", 00:41:07.270 "accel_get_opc_assignments", 00:41:07.270 "vmd_rescan", 00:41:07.270 "vmd_remove_device", 00:41:07.270 "vmd_enable", 00:41:07.270 "sock_get_default_impl", 00:41:07.270 "sock_set_default_impl", 00:41:07.270 "sock_impl_set_options", 00:41:07.270 "sock_impl_get_options", 00:41:07.270 "iobuf_get_stats", 00:41:07.270 "iobuf_set_options", 00:41:07.270 "keyring_get_keys", 00:41:07.270 "framework_get_pci_devices", 00:41:07.270 
"framework_get_config", 00:41:07.270 "framework_get_subsystems", 00:41:07.270 "fsdev_set_opts", 00:41:07.270 "fsdev_get_opts", 00:41:07.270 "trace_get_info", 00:41:07.270 "trace_get_tpoint_group_mask", 00:41:07.270 "trace_disable_tpoint_group", 00:41:07.270 "trace_enable_tpoint_group", 00:41:07.270 "trace_clear_tpoint_mask", 00:41:07.270 "trace_set_tpoint_mask", 00:41:07.270 "notify_get_notifications", 00:41:07.270 "notify_get_types", 00:41:07.270 "spdk_get_version", 00:41:07.270 "rpc_get_methods" 00:41:07.270 ] 00:41:07.270 09:52:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:41:07.270 09:52:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:07.270 09:52:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:07.270 09:52:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:41:07.270 09:52:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59171 00:41:07.270 09:52:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59171 ']' 00:41:07.270 09:52:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59171 00:41:07.270 09:52:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:41:07.270 09:52:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:07.270 09:52:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59171 00:41:07.529 09:52:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:07.529 09:52:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:07.529 killing process with pid 59171 00:41:07.529 09:52:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59171' 00:41:07.529 09:52:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59171 00:41:07.529 09:52:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59171 00:41:10.062 00:41:10.062 real 0m4.182s 00:41:10.062 user 0m7.517s 00:41:10.062 sys 0m0.695s 00:41:10.062 09:52:16 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:10.062 09:52:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:10.062 ************************************ 00:41:10.062 END TEST spdkcli_tcp 00:41:10.062 ************************************ 00:41:10.062 09:52:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:41:10.062 09:52:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:10.062 09:52:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:10.062 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:41:10.062 ************************************ 00:41:10.062 START TEST dpdk_mem_utility 00:41:10.062 ************************************ 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:41:10.062 * Looking for test storage... 
00:41:10.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:10.062 09:52:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:10.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.062 --rc genhtml_branch_coverage=1 00:41:10.062 --rc genhtml_function_coverage=1 00:41:10.062 --rc genhtml_legend=1 00:41:10.062 --rc geninfo_all_blocks=1 00:41:10.062 --rc geninfo_unexecuted_blocks=1 00:41:10.062 00:41:10.062 ' 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:10.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.062 --rc 
genhtml_branch_coverage=1 00:41:10.062 --rc genhtml_function_coverage=1 00:41:10.062 --rc genhtml_legend=1 00:41:10.062 --rc geninfo_all_blocks=1 00:41:10.062 --rc geninfo_unexecuted_blocks=1 00:41:10.062 00:41:10.062 ' 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:10.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.062 --rc genhtml_branch_coverage=1 00:41:10.062 --rc genhtml_function_coverage=1 00:41:10.062 --rc genhtml_legend=1 00:41:10.062 --rc geninfo_all_blocks=1 00:41:10.062 --rc geninfo_unexecuted_blocks=1 00:41:10.062 00:41:10.062 ' 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:10.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.062 --rc genhtml_branch_coverage=1 00:41:10.062 --rc genhtml_function_coverage=1 00:41:10.062 --rc genhtml_legend=1 00:41:10.062 --rc geninfo_all_blocks=1 00:41:10.062 --rc geninfo_unexecuted_blocks=1 00:41:10.062 00:41:10.062 ' 00:41:10.062 09:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:41:10.062 09:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59293 00:41:10.062 09:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59293 00:41:10.062 09:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59293 ']' 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:10.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:10.062 09:52:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:41:10.062 [2024-12-09 09:52:16.910190] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:41:10.062 [2024-12-09 09:52:16.910910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59293 ] 00:41:10.063 [2024-12-09 09:52:17.100377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:10.321 [2024-12-09 09:52:17.221721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:11.261 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:11.261 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:41:11.261 09:52:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:41:11.261 09:52:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:41:11.261 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.261 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:41:11.262 { 00:41:11.262 "filename": "/tmp/spdk_mem_dump.txt" 00:41:11.262 } 00:41:11.262 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.262 09:52:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:41:11.262 DPDK memory size 824.000000 MiB in 1 heap(s) 00:41:11.262 1 heaps totaling size 824.000000 MiB 00:41:11.262 size: 824.000000 MiB heap id: 0 00:41:11.262 end heaps---------- 00:41:11.262 9 mempools totaling size 603.782043 MiB 00:41:11.262 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:41:11.262 size: 158.602051 MiB name: PDU_data_out_Pool 00:41:11.262 size: 100.555481 MiB name: bdev_io_59293 00:41:11.262 size: 50.003479 MiB name: msgpool_59293 00:41:11.262 size: 36.509338 MiB name: fsdev_io_59293 00:41:11.262 size: 21.763794 MiB name: PDU_Pool 00:41:11.262 size: 19.513306 MiB name: SCSI_TASK_Pool 00:41:11.262 size: 4.133484 MiB name: evtpool_59293 00:41:11.262 size: 0.026123 MiB name: Session_Pool 00:41:11.262 end mempools------- 00:41:11.262 6 memzones totaling size 4.142822 MiB 00:41:11.262 size: 1.000366 MiB name: RG_ring_0_59293 00:41:11.262 size: 1.000366 MiB name: RG_ring_1_59293 00:41:11.262 size: 1.000366 MiB name: RG_ring_4_59293 00:41:11.262 size: 1.000366 MiB name: RG_ring_5_59293 00:41:11.262 size: 0.125366 MiB name: RG_ring_2_59293 00:41:11.262 size: 0.015991 MiB name: RG_ring_3_59293 00:41:11.262 end memzones------- 00:41:11.262 09:52:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:41:11.262 heap id: 0 total size: 824.000000 MiB number of busy elements: 310 number of free elements: 18 00:41:11.262 list of free elements. 
size: 16.782593 MiB 00:41:11.262 element at address: 0x200006400000 with size: 1.995972 MiB 00:41:11.262 element at address: 0x20000a600000 with size: 1.995972 MiB 00:41:11.262 element at address: 0x200003e00000 with size: 1.991028 MiB 00:41:11.262 element at address: 0x200019500040 with size: 0.999939 MiB 00:41:11.262 element at address: 0x200019900040 with size: 0.999939 MiB 00:41:11.262 element at address: 0x200019a00000 with size: 0.999084 MiB 00:41:11.262 element at address: 0x200032600000 with size: 0.994324 MiB 00:41:11.262 element at address: 0x200000400000 with size: 0.992004 MiB 00:41:11.262 element at address: 0x200019200000 with size: 0.959656 MiB 00:41:11.262 element at address: 0x200019d00040 with size: 0.936401 MiB 00:41:11.262 element at address: 0x200000200000 with size: 0.716980 MiB 00:41:11.262 element at address: 0x20001b400000 with size: 0.564148 MiB 00:41:11.262 element at address: 0x200000c00000 with size: 0.489197 MiB 00:41:11.262 element at address: 0x200019600000 with size: 0.487976 MiB 00:41:11.262 element at address: 0x200019e00000 with size: 0.485413 MiB 00:41:11.262 element at address: 0x200012c00000 with size: 0.433228 MiB 00:41:11.262 element at address: 0x200028800000 with size: 0.390442 MiB 00:41:11.262 element at address: 0x200000800000 with size: 0.350891 MiB 00:41:11.262 list of standard malloc elements. size: 199.286499 MiB 00:41:11.262 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:41:11.262 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:41:11.262 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:41:11.262 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:41:11.262 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:41:11.262 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:41:11.262 element at address: 0x200019deff40 with size: 0.062683 MiB 00:41:11.262 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:41:11.262 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:41:11.262 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:41:11.262 element at address: 0x200012bff040 with size: 0.000305 MiB 00:41:11.262 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:41:11.262 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:41:11.262 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:41:11.262 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:41:11.262 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200000cff000 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bff180 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bff280 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bff380 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bff480 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bff580 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bff680 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bff780 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bff880 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bff980 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200019affc40 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4922c0 with size: 0.000244 MiB 
00:41:11.263 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:41:11.263 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:41:11.264 element at 
address: 0x200028863f40 with size: 0.000244 MiB 00:41:11.264 element at address: 0x200028864040 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886af80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886b080 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886b180 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886b280 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886b380 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886b480 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886b580 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886b680 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886b780 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886b880 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886b980 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886be80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886c080 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886c180 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886c280 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886c380 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886c480 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886c580 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886c680 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886c780 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886c880 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886c980 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886d080 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886d180 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886d280 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886d380 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886d480 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886d580 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886d680 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886d780 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886d880 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886d980 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886da80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886db80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886dd80 
with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886de80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886df80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886e080 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886e180 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886e280 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886e380 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886e480 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886e580 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886e680 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886e780 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886e880 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886e980 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886f080 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886f180 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886f280 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886f380 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886f480 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886f580 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886f680 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886f780 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886f880 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886f980 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:41:11.264 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:41:11.264 list of memzone associated elements. 
size: 607.930908 MiB 00:41:11.264 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:41:11.264 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:41:11.264 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:41:11.264 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:41:11.264 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:41:11.264 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59293_0 00:41:11.264 element at address: 0x200000dff340 with size: 48.003113 MiB 00:41:11.264 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59293_0 00:41:11.264 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:41:11.264 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59293_0 00:41:11.264 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:41:11.264 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:41:11.264 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:41:11.264 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:41:11.264 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:41:11.264 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59293_0 00:41:11.264 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:41:11.264 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59293 00:41:11.264 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:41:11.264 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59293 00:41:11.264 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:41:11.264 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:41:11.264 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:41:11.264 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:41:11.264 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:41:11.264 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:41:11.264 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:41:11.264 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:41:11.264 element at address: 0x200000cff100 with size: 1.000549 MiB 00:41:11.264 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59293 00:41:11.264 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:41:11.264 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59293 00:41:11.264 element at address: 0x200019affd40 with size: 1.000549 MiB 00:41:11.264 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59293 00:41:11.264 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:41:11.264 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59293 00:41:11.264 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:41:11.264 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59293 00:41:11.264 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:41:11.264 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59293 00:41:11.264 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:41:11.264 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:41:11.264 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:41:11.264 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:41:11.264 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:41:11.264 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:41:11.264 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:41:11.264 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59293 00:41:11.264 element at address: 0x20000085df80 with size: 0.125549 MiB 00:41:11.264 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59293 00:41:11.264 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:41:11.264 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:41:11.264 element at address: 0x200028864140 with size: 0.023804 MiB 00:41:11.264 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:41:11.265 element at address: 0x200000859d40 with size: 0.016174 MiB 00:41:11.265 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59293 00:41:11.265 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:41:11.265 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:41:11.265 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:41:11.265 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59293 00:41:11.265 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:41:11.265 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59293 00:41:11.265 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:41:11.265 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59293 00:41:11.265 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:41:11.265 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:41:11.265 09:52:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:41:11.265 09:52:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59293 00:41:11.265 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59293 ']' 00:41:11.265 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59293 00:41:11.265 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:41:11.265 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:11.265 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59293 00:41:11.523 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:11.523 killing process with pid 59293 00:41:11.523 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:11.523 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59293' 00:41:11.523 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59293 00:41:11.523 09:52:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59293 00:41:14.049 00:41:14.049 real 0m3.875s 00:41:14.049 user 0m3.908s 00:41:14.049 sys 0m0.659s 00:41:14.049 09:52:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:14.049 09:52:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:41:14.049 ************************************ 00:41:14.049 END TEST dpdk_mem_utility 00:41:14.049 ************************************ 00:41:14.050 09:52:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:41:14.050 09:52:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:14.050 09:52:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:14.050 09:52:20 -- common/autotest_common.sh@10 -- # set +x 
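The heap/mempool/memzone dump above is produced in two steps: the env_dpdk_get_mem_stats RPC asks the running spdk_tgt to write its DPDK memory statistics to /tmp/spdk_mem_dump.txt (the JSON reply carrying that filename is visible in the trace), and scripts/dpdk_mem_info.py then parses the dump, first for the summary and then with -m 0 for the element-level view of heap 0. A minimal sketch of reproducing this by hand, assuming a checkout at $SPDK_DIR and a locally started target; calling rpc.py directly is an assumption here, since the harness goes through its rpc_cmd wrapper:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/bin/spdk_tgt" &                    # start the target (local run assumed)
    spdk_pid=$!
    sleep 1                                             # crude wait; the harness uses waitforlisten
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
    "$SPDK_DIR/scripts/dpdk_mem_info.py"                # heap/mempool/memzone summary
    "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0           # per-element breakdown of heap 0
    kill "$spdk_pid"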
00:41:14.050 ************************************ 00:41:14.050 START TEST event 00:41:14.050 ************************************ 00:41:14.050 09:52:20 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:41:14.050 * Looking for test storage... 00:41:14.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:41:14.050 09:52:20 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:14.050 09:52:20 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:14.050 09:52:20 event -- common/autotest_common.sh@1711 -- # lcov --version 00:41:14.050 09:52:20 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:14.050 09:52:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:14.050 09:52:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:14.050 09:52:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:14.050 09:52:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:41:14.050 09:52:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:41:14.050 09:52:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:41:14.050 09:52:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:41:14.050 09:52:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:41:14.050 09:52:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:41:14.050 09:52:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:41:14.050 09:52:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:14.050 09:52:20 event -- scripts/common.sh@344 -- # case "$op" in 00:41:14.050 09:52:20 event -- scripts/common.sh@345 -- # : 1 00:41:14.050 09:52:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:14.050 09:52:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:14.050 09:52:20 event -- scripts/common.sh@365 -- # decimal 1 00:41:14.050 09:52:20 event -- scripts/common.sh@353 -- # local d=1 00:41:14.050 09:52:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:14.050 09:52:20 event -- scripts/common.sh@355 -- # echo 1 00:41:14.050 09:52:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:41:14.050 09:52:20 event -- scripts/common.sh@366 -- # decimal 2 00:41:14.050 09:52:20 event -- scripts/common.sh@353 -- # local d=2 00:41:14.050 09:52:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:14.050 09:52:20 event -- scripts/common.sh@355 -- # echo 2 00:41:14.050 09:52:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:41:14.050 09:52:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:14.050 09:52:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:14.050 09:52:20 event -- scripts/common.sh@368 -- # return 0 00:41:14.050 09:52:20 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:14.050 09:52:20 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:14.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.050 --rc genhtml_branch_coverage=1 00:41:14.050 --rc genhtml_function_coverage=1 00:41:14.050 --rc genhtml_legend=1 00:41:14.050 --rc geninfo_all_blocks=1 00:41:14.050 --rc geninfo_unexecuted_blocks=1 00:41:14.050 00:41:14.050 ' 00:41:14.050 09:52:20 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:14.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.050 --rc genhtml_branch_coverage=1 00:41:14.050 --rc genhtml_function_coverage=1 00:41:14.050 --rc genhtml_legend=1 00:41:14.050 --rc 
geninfo_all_blocks=1 00:41:14.050 --rc geninfo_unexecuted_blocks=1 00:41:14.050 00:41:14.050 ' 00:41:14.050 09:52:20 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:14.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.050 --rc genhtml_branch_coverage=1 00:41:14.050 --rc genhtml_function_coverage=1 00:41:14.050 --rc genhtml_legend=1 00:41:14.050 --rc geninfo_all_blocks=1 00:41:14.050 --rc geninfo_unexecuted_blocks=1 00:41:14.050 00:41:14.050 ' 00:41:14.050 09:52:20 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:14.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.050 --rc genhtml_branch_coverage=1 00:41:14.050 --rc genhtml_function_coverage=1 00:41:14.050 --rc genhtml_legend=1 00:41:14.050 --rc geninfo_all_blocks=1 00:41:14.050 --rc geninfo_unexecuted_blocks=1 00:41:14.050 00:41:14.050 ' 00:41:14.050 09:52:20 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:41:14.050 09:52:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:41:14.050 09:52:20 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:41:14.050 09:52:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:41:14.050 09:52:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:14.050 09:52:20 event -- common/autotest_common.sh@10 -- # set +x 00:41:14.050 ************************************ 00:41:14.050 START TEST event_perf 00:41:14.050 ************************************ 00:41:14.050 09:52:20 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:41:14.050 Running I/O for 1 seconds...[2024-12-09 09:52:20.741396] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:41:14.050 [2024-12-09 09:52:20.741671] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59396 ] 00:41:14.050 [2024-12-09 09:52:20.926486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:14.308 [2024-12-09 09:52:21.095700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:14.308 [2024-12-09 09:52:21.095882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:14.308 [2024-12-09 09:52:21.096015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:14.308 Running I/O for 1 seconds...[2024-12-09 09:52:21.096027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:15.680 00:41:15.680 lcore 0: 192708 00:41:15.680 lcore 1: 192707 00:41:15.680 lcore 2: 192707 00:41:15.680 lcore 3: 192706 00:41:15.680 done. 
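The per-lcore counters printed just before "done." are the point of event_perf: with -m 0xF it starts one reactor on each of the four cores in the mask and, over the -t 1 second window, counts the events each reactor processes, so roughly 193k events per lcore per second in this run. A sketch of invoking the binary directly, with the path taken from the trace above (running it outside the run_test wrapper is an assumption):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1   # 4 reactors, 1-second run
    # expect one "lcore N: <count>" line per reactor, then "done."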
00:41:15.680 00:41:15.680 real 0m1.637s 00:41:15.680 user 0m4.383s 00:41:15.680 sys 0m0.126s 00:41:15.680 09:52:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:15.680 ************************************ 00:41:15.680 END TEST event_perf 00:41:15.680 ************************************ 00:41:15.680 09:52:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:41:15.680 09:52:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:41:15.680 09:52:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:15.680 09:52:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:15.680 09:52:22 event -- common/autotest_common.sh@10 -- # set +x 00:41:15.680 ************************************ 00:41:15.680 START TEST event_reactor 00:41:15.680 ************************************ 00:41:15.680 09:52:22 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:41:15.680 [2024-12-09 09:52:22.434472] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:41:15.680 [2024-12-09 09:52:22.434701] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59435 ] 00:41:15.680 [2024-12-09 09:52:22.608427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.937 [2024-12-09 09:52:22.751275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:17.310 test_start 00:41:17.310 oneshot 00:41:17.310 tick 100 00:41:17.310 tick 100 00:41:17.310 tick 250 00:41:17.310 tick 100 00:41:17.310 tick 100 00:41:17.310 tick 100 00:41:17.310 tick 250 00:41:17.310 tick 500 00:41:17.310 tick 100 00:41:17.310 tick 100 00:41:17.310 tick 250 00:41:17.310 tick 100 00:41:17.310 tick 100 00:41:17.310 test_end 00:41:17.310 00:41:17.310 real 0m1.591s 00:41:17.310 user 0m1.375s 00:41:17.310 sys 0m0.106s 00:41:17.310 ************************************ 00:41:17.310 END TEST event_reactor 00:41:17.310 ************************************ 00:41:17.310 09:52:23 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:17.311 09:52:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:41:17.311 09:52:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:41:17.311 09:52:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:17.311 09:52:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:17.311 09:52:24 event -- common/autotest_common.sh@10 -- # set +x 00:41:17.311 ************************************ 00:41:17.311 START TEST event_reactor_perf 00:41:17.311 ************************************ 00:41:17.311 09:52:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:41:17.311 [2024-12-09 09:52:24.070720] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:41:17.311 [2024-12-09 09:52:24.070898] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59477 ] 00:41:17.311 [2024-12-09 09:52:24.244976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:17.571 [2024-12-09 09:52:24.383585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.947 test_start 00:41:18.947 test_end 00:41:18.947 Performance: 274146 events per second 00:41:18.947 ************************************ 00:41:18.947 END TEST event_reactor_perf 00:41:18.947 ************************************ 00:41:18.947 00:41:18.947 real 0m1.589s 00:41:18.948 user 0m1.389s 00:41:18.948 sys 0m0.089s 00:41:18.948 09:52:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.948 09:52:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:41:18.948 09:52:25 event -- event/event.sh@49 -- # uname -s 00:41:18.948 09:52:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:41:18.948 09:52:25 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:41:18.948 09:52:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:18.948 09:52:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:18.948 09:52:25 event -- common/autotest_common.sh@10 -- # set +x 00:41:18.948 ************************************ 00:41:18.948 START TEST event_scheduler 00:41:18.948 ************************************ 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:41:18.948 * Looking for test storage... 
00:41:18.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:18.948 09:52:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.948 --rc genhtml_branch_coverage=1 00:41:18.948 --rc genhtml_function_coverage=1 00:41:18.948 --rc genhtml_legend=1 00:41:18.948 --rc geninfo_all_blocks=1 00:41:18.948 --rc geninfo_unexecuted_blocks=1 00:41:18.948 00:41:18.948 ' 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.948 --rc genhtml_branch_coverage=1 00:41:18.948 --rc genhtml_function_coverage=1 00:41:18.948 --rc genhtml_legend=1 00:41:18.948 --rc geninfo_all_blocks=1 00:41:18.948 --rc geninfo_unexecuted_blocks=1 00:41:18.948 00:41:18.948 ' 00:41:18.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.948 --rc genhtml_branch_coverage=1 00:41:18.948 --rc genhtml_function_coverage=1 00:41:18.948 --rc genhtml_legend=1 00:41:18.948 --rc geninfo_all_blocks=1 00:41:18.948 --rc geninfo_unexecuted_blocks=1 00:41:18.948 00:41:18.948 ' 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.948 --rc genhtml_branch_coverage=1 00:41:18.948 --rc genhtml_function_coverage=1 00:41:18.948 --rc genhtml_legend=1 00:41:18.948 --rc geninfo_all_blocks=1 00:41:18.948 --rc geninfo_unexecuted_blocks=1 00:41:18.948 00:41:18.948 ' 00:41:18.948 09:52:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:41:18.948 09:52:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59548 00:41:18.948 09:52:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:41:18.948 09:52:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:41:18.948 09:52:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59548 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59548 ']' 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:18.948 09:52:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:41:18.948 [2024-12-09 09:52:25.965084] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:41:18.948 [2024-12-09 09:52:25.965562] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59548 ] 00:41:19.206 [2024-12-09 09:52:26.159556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:19.464 [2024-12-09 09:52:26.330021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:19.464 [2024-12-09 09:52:26.330177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:19.464 [2024-12-09 09:52:26.330299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:19.464 [2024-12-09 09:52:26.330314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:20.032 09:52:26 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:20.032 09:52:26 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:41:20.032 09:52:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:41:20.032 09:52:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.032 09:52:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:41:20.032 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:41:20.032 POWER: Cannot set governor of lcore 0 to userspace 00:41:20.032 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:41:20.032 POWER: Cannot set governor of lcore 0 to performance 00:41:20.032 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:41:20.032 POWER: Cannot set governor of lcore 0 to userspace 00:41:20.032 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:41:20.032 POWER: Cannot set governor of lcore 0 to userspace 00:41:20.032 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:41:20.032 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:41:20.032 POWER: Unable to set Power Management Environment for lcore 0 00:41:20.032 [2024-12-09 09:52:27.005027] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:41:20.032 [2024-12-09 09:52:27.005056] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:41:20.032 [2024-12-09 09:52:27.005070] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:41:20.032 [2024-12-09 09:52:27.005094] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:41:20.032 [2024-12-09 09:52:27.005107] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:41:20.032 [2024-12-09 09:52:27.005121] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:41:20.032 09:52:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.032 09:52:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:41:20.032 09:52:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.032 09:52:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:41:20.600 [2024-12-09 09:52:27.336291] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
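The POWER and GUEST_CHANNEL errors above are expected in this environment rather than a test failure: selecting the dynamic scheduler makes the DPDK power governor probe /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor, and inside a VM with no cpufreq interface (and no virtio power-agent channel) that probe fails, so the scheduler runs without a governor and keeps its default thresholds (load limit 20, core limit 80, core busy 95), as the NOTICE lines show. Because the scheduler app was launched with --wait-for-rpc, the test selects the scheduler over RPC before framework initialization; a sketch of that ordering, assuming direct rpc.py use in place of the harness's rpc_cmd wrapper:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic   # chosen pre-init, while --wait-for-rpc holds startup
    "$SPDK_DIR/scripts/rpc.py" framework_start_init              # reactors start with the dynamic scheduler active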
00:41:20.600 09:52:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.600 09:52:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:41:20.600 09:52:27 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:41:20.600 09:52:27 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:20.600 09:52:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:41:20.600 ************************************
00:41:20.600 START TEST scheduler_create_thread
00:41:20.600 ************************************
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:20.600 2
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:20.600 3
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:20.600 4
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:20.600 5
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:20.600 6
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:20.600 7
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:20.600 8
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:20.600 9
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.600 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:20.600 10
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:20.601 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:21.167 ************************************
00:41:21.167 END TEST scheduler_create_thread
00:41:21.167 ************************************
00:41:21.167 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:21.167
00:41:21.167 real 0m0.593s
00:41:21.167 user 0m0.014s
00:41:21.167 sys 0m0.007s
00:41:21.167 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:21.167 09:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:41:21.167 09:52:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:41:21.167 09:52:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59548
00:41:21.167 09:52:27 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59548 ']'
00:41:21.167 09:52:27 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59548
00:41:21.167 09:52:27 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:41:21.167 09:52:27 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:41:21.167 09:52:27 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59548 killing process with pid 59548
00:41:21.168 09:52:28 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:41:21.168 09:52:28 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:41:21.168 09:52:28 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59548'
00:41:21.168 09:52:28 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59548
00:41:21.168 09:52:28 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59548
00:41:21.425 [2024-12-09 09:52:28.423065] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
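[Editor's note] The scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete verbs traced above are not core SPDK RPCs; rpc_cmd loads them from the test's own rpc.py plugin via --plugin scheduler_plugin. Reduced to plain shell, the test body is roughly the following sketch; the PYTHONPATH export reflects an assumed plugin location, and the thread IDs (11 and 12 above) are values returned by the create call:

    # Make the test-local plugin importable by rpc.py (assumed location).
    export PYTHONPATH="$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/event/scheduler"
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"; }

    rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100  # 100% busy, pinned to core 0
    tid=$(rpc scheduler_thread_create -n half_active -a 0)      # created idle, unpinned
    rpc scheduler_thread_set_active "$tid" 50                   # raise its busy time to 50%
    rpc scheduler_thread_delete "$tid"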
00:41:22.797 ************************************ 00:41:22.797 END TEST event_scheduler 00:41:22.797 ************************************ 00:41:22.797 00:41:22.797 real 0m3.848s 00:41:22.797 user 0m7.574s 00:41:22.797 sys 0m0.540s 00:41:22.797 09:52:29 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:22.797 09:52:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:41:22.797 09:52:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:41:22.797 09:52:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:41:22.797 09:52:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:22.797 09:52:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:22.797 09:52:29 event -- common/autotest_common.sh@10 -- # set +x 00:41:22.797 ************************************ 00:41:22.797 START TEST app_repeat 00:41:22.797 ************************************ 00:41:22.797 09:52:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:41:22.797 Process app_repeat pid: 59637 00:41:22.797 spdk_app_start Round 0 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59637 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59637' 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:41:22.797 09:52:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:41:22.798 09:52:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59637 /var/tmp/spdk-nbd.sock 00:41:22.798 09:52:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59637 ']' 00:41:22.798 09:52:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:41:22.798 09:52:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:22.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:41:22.798 09:52:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:41:22.798 09:52:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:22.798 09:52:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:41:22.798 [2024-12-09 09:52:29.641240] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:41:22.798 [2024-12-09 09:52:29.641454] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59637 ]
00:41:22.798 [2024-12-09 09:52:29.829817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:41:23.055 [2024-12-09 09:52:29.963247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:23.055 [2024-12-09 09:52:29.963293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:41:23.621 09:52:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:41:23.621 09:52:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:41:23.621 09:52:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:41:24.208 Malloc0
00:41:24.208 09:52:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:41:24.466 Malloc1
00:41:24.466 09:52:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:41:24.466 09:52:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:41:24.466 09:52:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:41:24.466 09:52:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:41:24.466 09:52:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:41:24.466 09:52:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:41:24.466 09:52:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:41:24.467 09:52:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:41:24.467 09:52:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:41:24.467 09:52:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:41:24.467 09:52:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:41:24.467 09:52:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:41:24.467 09:52:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:41:24.467 09:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:41:24.467 09:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:41:24.467 09:52:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:41:24.724 /dev/nbd0
00:41:24.724 09:52:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:41:24.724 09:52:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:41:24.724 1+0 records in
00:41:24.724 1+0 records out
00:41:24.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333437 s, 12.3 MB/s
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:41:24.724 09:52:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:41:24.724 09:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:41:24.724 09:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:41:24.724 09:52:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:41:24.982 /dev/nbd1
00:41:25.240 09:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:41:25.240 09:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:41:25.240 1+0 records in
00:41:25.240 1+0 records out
00:41:25.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335309 s, 12.2 MB/s
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:41:25.240 09:52:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:41:25.240 09:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:41:25.240 09:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:41:25.240 09:52:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:41:25.240 09:52:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:41:25.240 09:52:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:41:25.525 {
00:41:25.525 "nbd_device": "/dev/nbd0",
00:41:25.525 "bdev_name": "Malloc0"
00:41:25.525 },
00:41:25.525 {
00:41:25.525 "nbd_device": "/dev/nbd1",
00:41:25.525 "bdev_name": "Malloc1"
00:41:25.525 }
00:41:25.525 ]'
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:41:25.525 {
00:41:25.525 "nbd_device": "/dev/nbd0",
00:41:25.525 "bdev_name": "Malloc0"
00:41:25.525 },
00:41:25.525 {
00:41:25.525 "nbd_device": "/dev/nbd1",
00:41:25.525 "bdev_name": "Malloc1"
00:41:25.525 }
00:41:25.525 ]'
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:41:25.525 /dev/nbd1'
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:41:25.525 /dev/nbd1'
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:41:25.525 256+0 records in
00:41:25.525 256+0 records out
00:41:25.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00732164 s, 143 MB/s
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:41:25.525 256+0 records in
00:41:25.525 256+0 records out
00:41:25.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301256 s, 34.8 MB/s
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:41:25.525 256+0 records in
00:41:25.525 256+0 records out
00:41:25.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285501 s, 36.7 MB/s
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:41:25.525 09:52:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:41:25.795 09:52:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:41:25.795 09:52:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:41:25.795 09:52:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:41:25.795 09:52:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:41:25.795 09:52:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:41:25.795 09:52:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:41:25.795 09:52:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:41:25.795 09:52:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:41:25.795 09:52:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:41:25.795 09:52:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:41:26.053 09:52:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:41:26.053 09:52:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:41:26.053 09:52:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:41:26.053 09:52:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:41:26.053 09:52:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:41:26.053 09:52:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:41:26.053 09:52:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:41:26.053 09:52:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:41:26.053 09:52:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:41:26.053 09:52:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:41:26.053 09:52:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:41:26.620 09:52:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:41:26.620 09:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:41:26.620 09:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:41:26.620 09:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:41:26.620 09:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:41:26.620 09:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:41:26.620 09:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:41:26.620 09:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:41:26.620 09:52:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:41:26.620 09:52:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:41:26.620 09:52:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:41:26.620 09:52:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:41:26.620 09:52:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:41:27.187 09:52:33 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:41:28.122 [2024-12-09 09:52:35.053251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:41:28.380 [2024-12-09 09:52:35.182676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:41:28.380 [2024-12-09 09:52:35.182689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:28.380 [2024-12-09 09:52:35.374168] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:41:28.380 [2024-12-09 09:52:35.374283] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:41:30.276 spdk_app_start Round 1 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 09:52:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:41:30.276 09:52:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:41:30.276 09:52:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59637 /var/tmp/spdk-nbd.sock
00:41:30.276 09:52:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59637 ']'
00:41:30.276 09:52:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:41:30.277 09:52:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:41:30.277 09:52:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
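[Editor's note] Each app_repeat round repeats the same nbd data path seen above: create a 64 MB malloc bdev with 4 KiB blocks, export it through the kernel nbd driver, push random data through the block device with dd, and read it back with cmp. One round, condensed into a hedged sketch (the /tmp path is a simplification of the nbdrandtest file used by the test, and only one of the two devices is shown):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $rpc bdev_malloc_create 64 4096          # 64 MB bdev, 4096-byte blocks; prints the bdev name "Malloc0"
    $rpc nbd_start_disk Malloc0 /dev/nbd0    # expose the bdev as a kernel block device

    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256            # 1 MiB of random data
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through nbd
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0  # fails loudly if the round trip corrupted anything

    $rpc nbd_stop_disk /dev/nbd0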
00:41:30.277 09:52:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:30.277 09:52:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:41:30.534 09:52:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:30.534 09:52:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:41:30.534 09:52:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:41:30.791 Malloc0 00:41:30.791 09:52:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:41:31.049 Malloc1 00:41:31.307 09:52:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:31.307 09:52:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:41:31.564 /dev/nbd0 00:41:31.564 09:52:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:31.564 09:52:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:31.564 09:52:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:31.564 09:52:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:41:31.564 09:52:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:31.564 09:52:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:31.565 09:52:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:31.565 09:52:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:41:31.565 09:52:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:31.565 09:52:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:31.565 09:52:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:41:31.565 1+0 records in 00:41:31.565 1+0 records out 
00:41:31.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270003 s, 15.2 MB/s 00:41:31.565 09:52:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:41:31.565 09:52:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:41:31.565 09:52:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:41:31.565 09:52:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:31.565 09:52:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:41:31.565 09:52:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:31.565 09:52:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:31.565 09:52:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:41:31.822 /dev/nbd1 00:41:32.080 09:52:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:32.081 09:52:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:41:32.081 1+0 records in 00:41:32.081 1+0 records out 00:41:32.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308327 s, 13.3 MB/s 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:32.081 09:52:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:41:32.081 09:52:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:32.081 09:52:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:32.081 09:52:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:32.081 09:52:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:32.081 09:52:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:32.338 09:52:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:41:32.338 { 00:41:32.338 "nbd_device": "/dev/nbd0", 00:41:32.338 "bdev_name": "Malloc0" 00:41:32.338 }, 00:41:32.338 { 00:41:32.338 "nbd_device": "/dev/nbd1", 00:41:32.338 "bdev_name": "Malloc1" 00:41:32.338 } 
00:41:32.338 ]'
00:41:32.338 09:52:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:41:32.338 09:52:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:41:32.338 {
00:41:32.338 "nbd_device": "/dev/nbd0",
00:41:32.338 "bdev_name": "Malloc0"
00:41:32.338 },
00:41:32.339 {
00:41:32.339 "nbd_device": "/dev/nbd1",
00:41:32.339 "bdev_name": "Malloc1"
00:41:32.339 }
00:41:32.339 ]'
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:41:32.339 /dev/nbd1'
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:41:32.339 /dev/nbd1'
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:41:32.339 256+0 records in
00:41:32.339 256+0 records out
00:41:32.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100187 s, 105 MB/s
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:41:32.339 256+0 records in
00:41:32.339 256+0 records out
00:41:32.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0386751 s, 27.1 MB/s
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:41:32.339 256+0 records in
00:41:32.339 256+0 records out
00:41:32.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328814 s, 31.9 MB/s
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:41:32.339 09:52:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:41:32.596 09:52:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:41:32.596 09:52:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:41:32.596 09:52:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:41:32.596 09:52:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:41:32.596 09:52:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:41:32.596 09:52:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:41:32.596 09:52:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:41:32.596 09:52:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:41:32.596 09:52:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:41:32.596 09:52:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:41:32.854 09:52:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:41:32.854 09:52:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:41:32.854 09:52:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:41:32.854 09:52:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:41:32.854 09:52:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:41:32.854 09:52:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:41:32.854 09:52:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:41:32.854 09:52:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:41:32.854 09:52:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:41:32.854 09:52:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:41:33.111 09:52:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:41:33.111 09:52:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:41:33.111 09:52:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:41:33.112 09:52:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:41:33.112 09:52:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:41:33.112 09:52:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:41:33.112 09:52:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:41:33.112 09:52:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:41:33.112 09:52:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:41:33.112 09:52:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:41:33.112 09:52:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:41:33.370 09:52:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:41:33.370 09:52:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:41:33.370 09:52:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:41:33.627 09:52:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:41:33.627 09:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:41:33.627 09:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:41:33.627 09:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:41:33.627 09:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:41:33.627 09:52:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:41:33.627 09:52:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:41:33.627 09:52:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:41:33.627 09:52:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:41:33.627 09:52:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:41:33.957 09:52:40 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:41:35.328 [2024-12-09 09:52:41.980562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:41:35.328 [2024-12-09 09:52:42.114797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:35.328 [2024-12-09 09:52:42.114797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:41:35.328 [2024-12-09 09:52:42.309340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:41:35.328 [2024-12-09 09:52:42.309679] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:41:37.226 spdk_app_start Round 2 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 09:52:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:41:37.226 09:52:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:41:37.226 09:52:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59637 /var/tmp/spdk-nbd.sock
00:41:37.226 09:52:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59637 ']'
00:41:37.226 09:52:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:41:37.226 09:52:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:41:37.226 09:52:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
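[Editor's note] The teardown check traced in every round counts /dev/nbd entries in the nbd_get_disks output; an empty JSON array proves all exported devices were stopped. Standalone, the check looks roughly like the sketch below; the `|| true` mirrors the `-- # true` line in the trace, which keeps grep's non-zero exit on zero matches from aborting the test:

    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -ne 0 ]; then
        echo "nbd devices still exported: $count" >&2
        exit 1
    fi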
00:41:37.226 09:52:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:37.226 09:52:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:41:37.226 09:52:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:37.226 09:52:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:41:37.226 09:52:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:41:37.791 Malloc0 00:41:37.791 09:52:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:41:38.049 Malloc1 00:41:38.049 09:52:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:41:38.049 09:52:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:38.049 09:52:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:38.050 09:52:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:41:38.306 /dev/nbd0 00:41:38.563 09:52:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:38.563 09:52:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:41:38.563 1+0 records in 00:41:38.563 1+0 records out 
00:41:38.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375629 s, 10.9 MB/s 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:38.563 09:52:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:41:38.563 09:52:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:38.563 09:52:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:38.563 09:52:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:41:38.821 /dev/nbd1 00:41:38.821 09:52:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:38.821 09:52:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:41:38.821 1+0 records in 00:41:38.821 1+0 records out 00:41:38.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382263 s, 10.7 MB/s 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:38.821 09:52:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:41:38.821 09:52:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:38.821 09:52:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:38.821 09:52:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:38.821 09:52:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:38.821 09:52:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:41:39.387 { 00:41:39.387 "nbd_device": "/dev/nbd0", 00:41:39.387 "bdev_name": "Malloc0" 00:41:39.387 }, 00:41:39.387 { 00:41:39.387 "nbd_device": "/dev/nbd1", 00:41:39.387 "bdev_name": "Malloc1" 00:41:39.387 } 
00:41:39.387 ]' 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:41:39.387 { 00:41:39.387 "nbd_device": "/dev/nbd0", 00:41:39.387 "bdev_name": "Malloc0" 00:41:39.387 }, 00:41:39.387 { 00:41:39.387 "nbd_device": "/dev/nbd1", 00:41:39.387 "bdev_name": "Malloc1" 00:41:39.387 } 00:41:39.387 ]' 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:41:39.387 /dev/nbd1' 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:41:39.387 /dev/nbd1' 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:41:39.387 256+0 records in 00:41:39.387 256+0 records out 00:41:39.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00781461 s, 134 MB/s 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:41:39.387 256+0 records in 00:41:39.387 256+0 records out 00:41:39.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306235 s, 34.2 MB/s 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:41:39.387 256+0 records in 00:41:39.387 256+0 records out 00:41:39.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329041 s, 31.9 MB/s 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:41:39.387 09:52:46 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:39.387 09:52:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:41:39.388 09:52:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:39.388 09:52:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:39.646 09:52:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:39.646 09:52:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:39.646 09:52:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:39.646 09:52:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:39.646 09:52:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:39.646 09:52:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:39.646 09:52:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:41:39.646 09:52:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:41:39.646 09:52:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:39.646 09:52:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:41:39.903 09:52:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:39.903 09:52:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:39.903 09:52:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:39.903 09:52:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:39.903 09:52:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:39.903 09:52:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:39.903 09:52:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:41:39.903 09:52:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:41:39.903 09:52:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:39.903 09:52:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:39.904 09:52:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:40.470 09:52:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:41:40.470 09:52:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:40.470 09:52:47 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:41:40.470 09:52:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:41:40.470 09:52:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:41:40.470 09:52:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:40.470 09:52:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:41:40.470 09:52:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:41:40.470 09:52:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:41:40.470 09:52:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:41:40.470 09:52:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:41:40.470 09:52:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:41:40.470 09:52:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:41:41.039 09:52:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:41:41.974 [2024-12-09 09:52:48.946719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:42.233 [2024-12-09 09:52:49.123523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:42.233 [2024-12-09 09:52:49.123544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:42.491 [2024-12-09 09:52:49.356030] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:41:42.491 [2024-12-09 09:52:49.356168] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:41:43.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:41:43.890 09:52:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59637 /var/tmp/spdk-nbd.sock 00:41:43.890 09:52:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59637 ']' 00:41:43.890 09:52:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:41:43.890 09:52:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:43.890 09:52:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
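[note] The nbd_dd_data_verify trace above is a plain write-then-verify round trip. A minimal sketch of that flow, reconstructed from the xtrace (temp-file path shortened; the real function in bdev/nbd_common.sh may differ in detail):

    # Write 1 MiB of random data, copy it onto each NBD device, then
    # compare every device's contents byte-for-byte against the source.
    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random bytes
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct # direct I/O, as traced
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$dev"                            # any mismatch fails the test
    done
    rm "$tmp_file"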
00:41:43.890 09:52:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:43.890 09:52:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:41:44.148 09:52:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:44.148 09:52:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:41:44.148 09:52:51 event.app_repeat -- event/event.sh@39 -- # killprocess 59637 00:41:44.148 09:52:51 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59637 ']' 00:41:44.148 09:52:51 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59637 00:41:44.148 09:52:51 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:41:44.148 09:52:51 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:44.148 09:52:51 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59637 00:41:44.148 killing process with pid 59637 00:41:44.148 09:52:51 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:44.148 09:52:51 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:44.148 09:52:51 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59637' 00:41:44.148 09:52:51 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59637 00:41:44.148 09:52:51 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59637 00:41:45.081 spdk_app_start is called in Round 0. 00:41:45.081 Shutdown signal received, stop current app iteration 00:41:45.081 Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 reinitialization... 00:41:45.081 spdk_app_start is called in Round 1. 00:41:45.081 Shutdown signal received, stop current app iteration 00:41:45.081 Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 reinitialization... 00:41:45.081 spdk_app_start is called in Round 2. 00:41:45.081 Shutdown signal received, stop current app iteration 00:41:45.081 Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 reinitialization... 00:41:45.081 spdk_app_start is called in Round 3. 00:41:45.081 Shutdown signal received, stop current app iteration 00:41:45.081 09:52:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:41:45.082 09:52:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:41:45.082 00:41:45.082 real 0m22.468s 00:41:45.082 user 0m50.049s 00:41:45.082 sys 0m3.284s 00:41:45.082 09:52:52 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:45.082 09:52:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:41:45.082 ************************************ 00:41:45.082 END TEST app_repeat 00:41:45.082 ************************************ 00:41:45.082 09:52:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:41:45.082 09:52:52 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:41:45.082 09:52:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:45.082 09:52:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:45.082 09:52:52 event -- common/autotest_common.sh@10 -- # set +x 00:41:45.082 ************************************ 00:41:45.082 START TEST cpu_locks 00:41:45.082 ************************************ 00:41:45.082 09:52:52 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:41:45.340 * Looking for test storage... 
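[note] The killprocess helper traced in the app_repeat teardown above checks that the pid is still alive and is not a sudo wrapper before signalling it. A rough reconstruction from the xtrace; the sudo branch is not exercised in this log, so it is omitted rather than guessed at:

    killprocess() {
        local pid=$1 process_name=unknown
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # bail out if it already exited
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then              # never signal a bare sudo wrapper
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                                   # reap; works because the target is our child
        fi
    }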
00:41:45.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:41:45.340 09:52:52 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:45.340 09:52:52 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:41:45.340 09:52:52 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:45.340 09:52:52 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:45.340 09:52:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:41:45.340 09:52:52 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:45.340 09:52:52 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:45.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.340 --rc genhtml_branch_coverage=1 00:41:45.340 --rc genhtml_function_coverage=1 00:41:45.340 --rc genhtml_legend=1 00:41:45.340 --rc geninfo_all_blocks=1 00:41:45.340 --rc geninfo_unexecuted_blocks=1 00:41:45.340 00:41:45.340 ' 00:41:45.340 09:52:52 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:45.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.340 --rc genhtml_branch_coverage=1 00:41:45.340 --rc genhtml_function_coverage=1 
00:41:45.340 --rc genhtml_legend=1 00:41:45.340 --rc geninfo_all_blocks=1 00:41:45.340 --rc geninfo_unexecuted_blocks=1 00:41:45.340 00:41:45.340 ' 00:41:45.341 09:52:52 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:45.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.341 --rc genhtml_branch_coverage=1 00:41:45.341 --rc genhtml_function_coverage=1 00:41:45.341 --rc genhtml_legend=1 00:41:45.341 --rc geninfo_all_blocks=1 00:41:45.341 --rc geninfo_unexecuted_blocks=1 00:41:45.341 00:41:45.341 ' 00:41:45.341 09:52:52 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:45.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.341 --rc genhtml_branch_coverage=1 00:41:45.341 --rc genhtml_function_coverage=1 00:41:45.341 --rc genhtml_legend=1 00:41:45.341 --rc geninfo_all_blocks=1 00:41:45.341 --rc geninfo_unexecuted_blocks=1 00:41:45.341 00:41:45.341 ' 00:41:45.341 09:52:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:41:45.341 09:52:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:41:45.341 09:52:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:41:45.341 09:52:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:41:45.341 09:52:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:45.341 09:52:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:45.341 09:52:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:41:45.341 ************************************ 00:41:45.341 START TEST default_locks 00:41:45.341 ************************************ 00:41:45.341 09:52:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:41:45.341 09:52:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60130 00:41:45.341 09:52:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60130 00:41:45.341 09:52:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60130 ']' 00:41:45.341 09:52:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:45.341 09:52:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:45.341 09:52:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:41:45.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:45.341 09:52:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:45.341 09:52:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:45.341 09:52:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:41:45.598 [2024-12-09 09:52:52.433709] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
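[note] The scripts/common.sh trace above (lt 1.15 2, deciding which lcov options to use) splits each version string on dots/dashes/colons and compares component by component. A condensed less-than sketch of that comparison; the traced cmp_versions handles arbitrary operators, this keeps only the path the log exercises:

    # Return 0 iff dotted version $1 is strictly older than $2.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1   # 10# guards against octal
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
        done
        return 1    # equal, so not strictly less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace's outcome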
00:41:45.598 [2024-12-09 09:52:52.433907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60130 ] 00:41:45.598 [2024-12-09 09:52:52.625067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.856 [2024-12-09 09:52:52.789515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:46.788 09:52:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:46.788 09:52:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:41:46.788 09:52:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60130 00:41:46.788 09:52:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60130 00:41:46.788 09:52:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:41:47.045 09:52:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60130 00:41:47.046 09:52:54 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60130 ']' 00:41:47.046 09:52:54 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60130 00:41:47.046 09:52:54 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:41:47.046 09:52:54 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:47.046 09:52:54 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60130 00:41:47.046 09:52:54 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:47.046 09:52:54 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:47.046 killing process with pid 60130 00:41:47.046 09:52:54 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60130' 00:41:47.046 09:52:54 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60130 00:41:47.046 09:52:54 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60130 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60130 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60130 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60130 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60130 ']' 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:49.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
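[note] locks_exist, traced just above for pid 60130, decides whether a target still holds its CPU-core lock by asking lslocks. The check as it appears in the xtrace:

    # A held core lock shows up in lslocks as a file lock named spdk_cpu_lock_*.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 60130 && echo "pid 60130 still holds its core lock(s)"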
00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:49.582 ERROR: process (pid: 60130) is no longer running 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:41:49.582 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60130) - No such process 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:41:49.582 00:41:49.582 real 0m3.991s 00:41:49.582 user 0m3.928s 00:41:49.582 sys 0m0.697s 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:49.582 ************************************ 00:41:49.582 END TEST default_locks 00:41:49.582 ************************************ 00:41:49.582 09:52:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:41:49.582 09:52:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:41:49.582 09:52:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:49.582 09:52:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:49.582 09:52:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:41:49.582 ************************************ 00:41:49.582 START TEST default_locks_via_rpc 00:41:49.582 ************************************ 00:41:49.582 09:52:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:41:49.582 09:52:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60205 00:41:49.582 09:52:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60205 00:41:49.582 09:52:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:41:49.582 09:52:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60205 ']' 00:41:49.582 09:52:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:49.582 09:52:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:49.582 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:41:49.582 09:52:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:49.582 09:52:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:49.582 09:52:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:49.582 [2024-12-09 09:52:56.469792] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:41:49.582 [2024-12-09 09:52:56.469976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60205 ] 00:41:49.840 [2024-12-09 09:52:56.655804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:49.840 [2024-12-09 09:52:56.786808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60205 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60205 00:41:50.773 09:52:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:41:51.339 09:52:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60205 00:41:51.339 09:52:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60205 ']' 00:41:51.339 09:52:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60205 00:41:51.339 09:52:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:41:51.339 09:52:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:51.339 09:52:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60205 00:41:51.339 09:52:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:51.339 09:52:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:51.339 killing process with pid 60205 00:41:51.339 09:52:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60205' 00:41:51.339 09:52:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60205 00:41:51.339 09:52:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60205 00:41:53.867 00:41:53.867 real 0m4.049s 00:41:53.867 user 0m4.036s 00:41:53.867 sys 0m0.734s 00:41:53.867 09:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:53.867 09:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:53.867 ************************************ 00:41:53.867 END TEST default_locks_via_rpc 00:41:53.867 ************************************ 00:41:53.867 09:53:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:41:53.867 09:53:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:53.867 09:53:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:53.867 09:53:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:41:53.867 ************************************ 00:41:53.867 START TEST non_locking_app_on_locked_coremask 00:41:53.867 ************************************ 00:41:53.867 09:53:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:41:53.867 09:53:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60274 00:41:53.867 09:53:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60274 /var/tmp/spdk.sock 00:41:53.867 09:53:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:41:53.867 09:53:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60274 ']' 00:41:53.867 09:53:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:53.867 09:53:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:53.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:53.867 09:53:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:53.867 09:53:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:53.867 09:53:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:41:53.867 [2024-12-09 09:53:00.572893] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
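[note] The default_locks_via_rpc run that ends above toggles the same lock state over JSON-RPC instead of command-line flags. The two calls visible in the trace (rpc_cmd is a thin wrapper over scripts/rpc.py; path shortened here), issued against a live target, would look like:

    # drop the per-core file locks of a running target, then re-take them
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # after re-enabling, lslocks -p <pid> shows spdk_cpu_lock_* entries again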
00:41:53.867 [2024-12-09 09:53:00.573113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60274 ] 00:41:53.867 [2024-12-09 09:53:00.753198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:53.867 [2024-12-09 09:53:00.886309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:54.801 09:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:54.801 09:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:41:54.801 09:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:41:54.801 09:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60295 00:41:54.801 09:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60295 /var/tmp/spdk2.sock 00:41:54.801 09:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60295 ']' 00:41:54.801 09:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:41:54.801 09:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:54.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:41:54.801 09:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:41:54.801 09:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:54.801 09:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:41:55.059 [2024-12-09 09:53:01.932089] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:41:55.059 [2024-12-09 09:53:01.932262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60295 ] 00:41:55.316 [2024-12-09 09:53:02.131092] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
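[note] The launch traced above is the crux of non_locking_app_on_locked_coremask: a second target starts on the same core mask as the first, which only succeeds because it opts out of core locking (hence the "CPU core locks deactivated" notice). Reconstructed from the trace, with the binary path shortened:

    # first target claims core 0 (-m 0x1) and holds /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 &
    # second target shares core 0 but skips the lock; a separate RPC socket (-r)
    # keeps both instances independently addressable
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &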
00:41:55.316 [2024-12-09 09:53:02.131167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:55.574 [2024-12-09 09:53:02.395374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:58.128 09:53:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:58.128 09:53:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:41:58.128 09:53:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60274 00:41:58.128 09:53:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60274 00:41:58.128 09:53:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:41:58.385 09:53:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60274 00:41:58.385 09:53:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60274 ']' 00:41:58.385 09:53:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60274 00:41:58.385 09:53:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:41:58.385 09:53:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:58.386 09:53:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60274 00:41:58.386 09:53:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:58.386 09:53:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:58.386 killing process with pid 60274 00:41:58.386 09:53:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60274' 00:41:58.386 09:53:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60274 00:41:58.386 09:53:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60274 00:42:03.651 09:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60295 00:42:03.651 09:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60295 ']' 00:42:03.651 09:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60295 00:42:03.651 09:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:42:03.651 09:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:03.651 09:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60295 00:42:03.651 09:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:03.651 09:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:03.651 killing process with pid 60295 00:42:03.651 09:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60295' 00:42:03.651 09:53:10 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60295 00:42:03.651 09:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60295 00:42:05.580 00:42:05.580 real 0m11.937s 00:42:05.580 user 0m12.493s 00:42:05.580 sys 0m1.433s 00:42:05.580 09:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:05.580 09:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:05.580 ************************************ 00:42:05.580 END TEST non_locking_app_on_locked_coremask 00:42:05.580 ************************************ 00:42:05.580 09:53:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:42:05.580 09:53:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:05.580 09:53:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:05.580 09:53:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:05.580 ************************************ 00:42:05.580 START TEST locking_app_on_unlocked_coremask 00:42:05.580 ************************************ 00:42:05.580 09:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:42:05.580 09:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60443 00:42:05.580 09:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:42:05.580 09:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60443 /var/tmp/spdk.sock 00:42:05.580 09:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60443 ']' 00:42:05.580 09:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:05.580 09:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:05.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:05.580 09:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:05.580 09:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:05.580 09:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:05.580 [2024-12-09 09:53:12.571775] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:05.580 [2024-12-09 09:53:12.571957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60443 ] 00:42:05.838 [2024-12-09 09:53:12.747612] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
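[note] Every target start in this log is followed by waitforlisten <pid> [socket]. The trace only shows the setup (max_retries=100 and the "Waiting for process..." echo), so the polling body below is an assumption about the general shape, not the verbatim helper; rpc_get_methods stands in for whatever readiness probe the real function uses:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        [ -n "$pid" ] || return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            kill -0 "$pid" || return 1     # the process died while we were waiting
            sleep 0.5
        done
        return 1
    }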
00:42:05.838 [2024-12-09 09:53:12.747725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:06.096 [2024-12-09 09:53:12.882296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:07.032 09:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:07.032 09:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:42:07.032 09:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60465 00:42:07.032 09:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60465 /var/tmp/spdk2.sock 00:42:07.032 09:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60465 ']' 00:42:07.032 09:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:42:07.032 09:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:07.032 09:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:42:07.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:42:07.032 09:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:42:07.032 09:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:07.032 09:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:07.033 [2024-12-09 09:53:13.917990] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
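[note] With two targets up, every RPC in this log carries -s to pick an instance; omit it and the default socket is used. For example (socket paths from the trace; framework_get_reactors is merely an illustrative method, not one this log calls):

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_get_reactors   # first instance
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_get_reactors   # second instance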
00:42:07.033 [2024-12-09 09:53:13.918182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60465 ] 00:42:07.291 [2024-12-09 09:53:14.123766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:07.549 [2024-12-09 09:53:14.402670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:10.195 09:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:10.195 09:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:42:10.195 09:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60465 00:42:10.195 09:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60465 00:42:10.195 09:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:42:10.763 09:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60443 00:42:10.763 09:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60443 ']' 00:42:10.763 09:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60443 00:42:10.763 09:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:42:10.763 09:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:10.763 09:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60443 00:42:10.763 09:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:10.763 09:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:10.763 killing process with pid 60443 00:42:10.763 09:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60443' 00:42:10.763 09:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60443 00:42:10.763 09:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60443 00:42:16.092 09:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60465 00:42:16.092 09:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60465 ']' 00:42:16.092 09:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60465 00:42:16.092 09:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:42:16.092 09:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:16.092 09:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60465 00:42:16.092 09:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:16.092 killing process with pid 60465 00:42:16.092 09:53:22 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:16.092 09:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60465' 00:42:16.092 09:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60465 00:42:16.092 09:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60465 00:42:17.991 00:42:17.991 real 0m12.434s 00:42:17.991 user 0m13.187s 00:42:17.991 sys 0m1.562s 00:42:17.991 09:53:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:17.991 09:53:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:17.991 ************************************ 00:42:17.991 END TEST locking_app_on_unlocked_coremask 00:42:17.991 ************************************ 00:42:17.991 09:53:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:42:17.991 09:53:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:17.991 09:53:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:17.991 09:53:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:17.991 ************************************ 00:42:17.991 START TEST locking_app_on_locked_coremask 00:42:17.991 ************************************ 00:42:17.991 09:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:42:17.991 09:53:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60626 00:42:17.991 09:53:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60626 /var/tmp/spdk.sock 00:42:17.991 09:53:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:42:17.991 09:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60626 ']' 00:42:17.991 09:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:17.991 09:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:17.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:17.991 09:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:17.991 09:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:17.991 09:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:18.249 [2024-12-09 09:53:25.055088] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:42:18.249 [2024-12-09 09:53:25.056123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60626 ] 00:42:18.249 [2024-12-09 09:53:25.243489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:18.507 [2024-12-09 09:53:25.374801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60642 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60642 /var/tmp/spdk2.sock 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60642 /var/tmp/spdk2.sock 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60642 /var/tmp/spdk2.sock 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60642 ']' 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:19.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:42:19.441 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:19.442 09:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:19.442 [2024-12-09 09:53:26.356762] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:42:19.442 [2024-12-09 09:53:26.356921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60642 ] 00:42:19.700 [2024-12-09 09:53:26.552764] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60626 has claimed it. 00:42:19.700 [2024-12-09 09:53:26.552873] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:42:20.265 ERROR: process (pid: 60642) is no longer running 00:42:20.265 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60642) - No such process 00:42:20.265 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:20.265 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:42:20.265 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:42:20.265 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:20.265 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:20.265 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:20.265 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60626 00:42:20.265 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:42:20.265 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60626 00:42:20.523 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60626 00:42:20.523 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60626 ']' 00:42:20.523 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60626 00:42:20.523 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:42:20.523 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:20.523 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60626 00:42:20.523 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:20.523 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:20.523 killing process with pid 60626 00:42:20.523 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60626' 00:42:20.523 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60626 00:42:20.523 09:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60626 00:42:23.053 00:42:23.053 real 0m4.938s 00:42:23.053 user 0m5.331s 00:42:23.053 sys 0m0.899s 00:42:23.053 09:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:23.053 09:53:29 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:42:23.053 ************************************ 00:42:23.053 END TEST locking_app_on_locked_coremask 00:42:23.053 ************************************ 00:42:23.053 09:53:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:42:23.053 09:53:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:23.053 09:53:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:23.053 09:53:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:23.053 ************************************ 00:42:23.053 START TEST locking_overlapped_coremask 00:42:23.053 ************************************ 00:42:23.053 09:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:42:23.053 09:53:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60706 00:42:23.053 09:53:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:42:23.053 09:53:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60706 /var/tmp/spdk.sock 00:42:23.053 09:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60706 ']' 00:42:23.053 09:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:23.053 09:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:23.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:23.053 09:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:23.053 09:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:23.053 09:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:23.053 [2024-12-09 09:53:30.021150] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
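[note] The failed start above (pid 60642 could not claim core 0, already held by 60626, and exited) is the expected outcome: the NOT wrapper in the trace inverts the status of waitforlisten so the test passes only when the listen never happens. A sketch matching the es bookkeeping visible in the xtrace; the (( es > 128 )) and pattern-list branches are not exercised in this log and are left out:

    # Run a command that is expected to fail; exit 0 iff it does fail.
    NOT() {
        local es=0
        "$@" || es=$?
        (( !es == 0 ))    # true (status 0) exactly when "$@" returned nonzero
    }

    NOT waitforlisten 60642 /var/tmp/spdk2.sock   # passes: the core is already locked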
00:42:23.053 [2024-12-09 09:53:30.021378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60706 ] 00:42:23.311 [2024-12-09 09:53:30.196360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:23.312 [2024-12-09 09:53:30.330844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:23.312 [2024-12-09 09:53:30.330910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:23.312 [2024-12-09 09:53:30.330919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60735 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60735 /var/tmp/spdk2.sock 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60735 /var/tmp/spdk2.sock 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60735 /var/tmp/spdk2.sock 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60735 ']' 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:24.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:24.247 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:24.506 [2024-12-09 09:53:31.331127] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
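The two targets launched in this test ask for overlapping masks: `-m 0x7` is cores 0-2 and `-m 0x1c` is cores 2-4, so both want core 2 — exactly the core named in the `claim_cpu_cores` error that follows. The overlap falls out of simple bit arithmetic:

```bash
# 0x7 & 0x1c = 0x4, i.e. bit 2 -> the contested core
printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))
```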
00:42:24.506 [2024-12-09 09:53:31.331619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60735 ] 00:42:24.506 [2024-12-09 09:53:31.539195] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60706 has claimed it. 00:42:24.506 [2024-12-09 09:53:31.539308] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:42:25.073 ERROR: process (pid: 60735) is no longer running 00:42:25.073 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60735) - No such process 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60706 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60706 ']' 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60706 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60706 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60706' 00:42:25.073 killing process with pid 60706 00:42:25.073 09:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60706 00:42:25.073 09:53:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60706 00:42:27.601 00:42:27.601 real 0m4.494s 00:42:27.601 user 0m12.209s 00:42:27.601 sys 0m0.658s 00:42:27.601 09:53:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:27.601 09:53:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:27.601 ************************************ 00:42:27.601 END TEST locking_overlapped_coremask 00:42:27.601 ************************************ 00:42:27.601 09:53:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:42:27.601 09:53:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:27.601 09:53:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:27.601 09:53:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:27.601 ************************************ 00:42:27.601 START TEST locking_overlapped_coremask_via_rpc 00:42:27.601 ************************************ 00:42:27.601 09:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:42:27.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:27.601 09:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60799 00:42:27.601 09:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:42:27.601 09:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60799 /var/tmp/spdk.sock 00:42:27.601 09:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60799 ']' 00:42:27.601 09:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:27.601 09:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:27.601 09:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:27.601 09:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:27.601 09:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:27.601 [2024-12-09 09:53:34.580350] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:27.601 [2024-12-09 09:53:34.580983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60799 ] 00:42:27.859 [2024-12-09 09:53:34.804961] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
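Unlike the previous test, both targets here start with `--disable-cpumask-locks` (hence the "CPU core locks deactivated" notice above), so the overlapping masks are tolerated at startup and the collision is only provoked later through the `framework_enable_cpumask_locks` RPC. A rough outline of the sequence this test drives, with binary paths shortened — a sketch, not the literal test code:

```bash
spdk_tgt -m 0x7  --disable-cpumask-locks &                         # target 1
spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # target 2
rpc.py framework_enable_cpumask_locks                   # target 1 locks cores 0-2
rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: core 2 taken
```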
00:42:27.859 [2024-12-09 09:53:34.805445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:28.117 [2024-12-09 09:53:34.978756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:28.117 [2024-12-09 09:53:34.978856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:28.117 [2024-12-09 09:53:34.978859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:29.049 09:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:29.049 09:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:42:29.049 09:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60817 00:42:29.049 09:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:42:29.049 09:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60817 /var/tmp/spdk2.sock 00:42:29.049 09:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60817 ']' 00:42:29.049 09:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:42:29.049 09:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:29.049 09:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:42:29.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:42:29.049 09:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:29.049 09:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:29.308 [2024-12-09 09:53:36.179728] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:29.308 [2024-12-09 09:53:36.180176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60817 ] 00:42:29.566 [2024-12-09 09:53:36.384801] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:42:29.566 [2024-12-09 09:53:36.384885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:29.823 [2024-12-09 09:53:36.650835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:29.823 [2024-12-09 09:53:36.654378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:29.823 [2024-12-09 09:53:36.654414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:32.366 [2024-12-09 09:53:39.101607] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60799 has claimed it. 00:42:32.366 request: 00:42:32.366 { 00:42:32.366 "method": "framework_enable_cpumask_locks", 00:42:32.366 "req_id": 1 00:42:32.366 } 00:42:32.366 Got JSON-RPC error response 00:42:32.366 response: 00:42:32.366 { 00:42:32.366 "code": -32603, 00:42:32.366 "message": "Failed to claim CPU core: 2" 00:42:32.366 } 00:42:32.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
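The JSON-RPC exchange above is the expected failure: `-32603` is the spec's "Internal error" code, and SPDK carries the claim failure ("Failed to claim CPU core: 2") in the message field. The test's `NOT` wrapper only needs the non-zero exit status, but a caller could key on the message text directly — a sketch, relying on `rpc.py` printing the error response on failure:

```bash
rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 2>&1 \
    | grep -q 'Failed to claim CPU core' && echo "core already claimed"
```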
00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60799 /var/tmp/spdk.sock 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60799 ']' 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60817 /var/tmp/spdk2.sock 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60817 ']' 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:42:32.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:32.366 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:32.933 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:32.933 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:42:32.933 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:42:32.933 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:42:32.933 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:42:32.933 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:42:32.933 00:42:32.933 real 0m5.293s 00:42:32.933 user 0m1.977s 00:42:32.933 sys 0m0.254s 00:42:32.933 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:32.933 09:53:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:32.933 ************************************ 00:42:32.933 END TEST locking_overlapped_coremask_via_rpc 00:42:32.933 ************************************ 00:42:32.933 09:53:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:42:32.933 09:53:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60799 ]] 00:42:32.933 09:53:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60799 00:42:32.933 09:53:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60799 ']' 00:42:32.933 09:53:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60799 00:42:32.933 09:53:39 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:42:32.933 09:53:39 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:32.933 09:53:39 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60799 00:42:32.933 killing process with pid 60799 00:42:32.933 09:53:39 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:32.933 09:53:39 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:32.933 09:53:39 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60799' 00:42:32.933 09:53:39 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60799 00:42:32.933 09:53:39 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60799 00:42:35.460 09:53:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60817 ]] 00:42:35.460 09:53:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60817 00:42:35.460 09:53:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60817 ']' 00:42:35.460 09:53:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60817 00:42:35.460 09:53:42 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:42:35.460 09:53:42 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:35.460 
09:53:42 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60817 00:42:35.460 killing process with pid 60817 00:42:35.460 09:53:42 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:42:35.460 09:53:42 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:42:35.460 09:53:42 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60817' 00:42:35.460 09:53:42 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60817 00:42:35.460 09:53:42 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60817 00:42:37.392 09:53:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:42:37.392 Process with pid 60799 is not found 00:42:37.392 09:53:44 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:42:37.392 09:53:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60799 ]] 00:42:37.392 09:53:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60799 00:42:37.392 09:53:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60799 ']' 00:42:37.392 09:53:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60799 00:42:37.392 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60799) - No such process 00:42:37.392 09:53:44 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60799 is not found' 00:42:37.392 09:53:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60817 ]] 00:42:37.392 09:53:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60817 00:42:37.392 Process with pid 60817 is not found 00:42:37.392 09:53:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60817 ']' 00:42:37.392 09:53:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60817 00:42:37.392 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60817) - No such process 00:42:37.392 09:53:44 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60817 is not found' 00:42:37.392 09:53:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:42:37.392 ************************************ 00:42:37.392 END TEST cpu_locks 00:42:37.392 ************************************ 00:42:37.392 00:42:37.392 real 0m52.261s 00:42:37.392 user 1m30.953s 00:42:37.392 sys 0m7.449s 00:42:37.392 09:53:44 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:37.392 09:53:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:37.392 ************************************ 00:42:37.392 END TEST event 00:42:37.392 ************************************ 00:42:37.392 00:42:37.392 real 1m23.877s 00:42:37.392 user 2m35.913s 00:42:37.392 sys 0m11.862s 00:42:37.393 09:53:44 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:37.393 09:53:44 event -- common/autotest_common.sh@10 -- # set +x 00:42:37.651 09:53:44 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:42:37.651 09:53:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:37.651 09:53:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:37.651 09:53:44 -- common/autotest_common.sh@10 -- # set +x 00:42:37.651 ************************************ 00:42:37.651 START TEST thread 00:42:37.651 ************************************ 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:42:37.651 * Looking for test storage... 
00:42:37.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:37.651 09:53:44 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:37.651 09:53:44 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:37.651 09:53:44 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:37.651 09:53:44 thread -- scripts/common.sh@336 -- # IFS=.-: 00:42:37.651 09:53:44 thread -- scripts/common.sh@336 -- # read -ra ver1 00:42:37.651 09:53:44 thread -- scripts/common.sh@337 -- # IFS=.-: 00:42:37.651 09:53:44 thread -- scripts/common.sh@337 -- # read -ra ver2 00:42:37.651 09:53:44 thread -- scripts/common.sh@338 -- # local 'op=<' 00:42:37.651 09:53:44 thread -- scripts/common.sh@340 -- # ver1_l=2 00:42:37.651 09:53:44 thread -- scripts/common.sh@341 -- # ver2_l=1 00:42:37.651 09:53:44 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:37.651 09:53:44 thread -- scripts/common.sh@344 -- # case "$op" in 00:42:37.651 09:53:44 thread -- scripts/common.sh@345 -- # : 1 00:42:37.651 09:53:44 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:37.651 09:53:44 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:37.651 09:53:44 thread -- scripts/common.sh@365 -- # decimal 1 00:42:37.651 09:53:44 thread -- scripts/common.sh@353 -- # local d=1 00:42:37.651 09:53:44 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:37.651 09:53:44 thread -- scripts/common.sh@355 -- # echo 1 00:42:37.651 09:53:44 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:42:37.651 09:53:44 thread -- scripts/common.sh@366 -- # decimal 2 00:42:37.651 09:53:44 thread -- scripts/common.sh@353 -- # local d=2 00:42:37.651 09:53:44 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:37.651 09:53:44 thread -- scripts/common.sh@355 -- # echo 2 00:42:37.651 09:53:44 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:42:37.651 09:53:44 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:37.651 09:53:44 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:37.651 09:53:44 thread -- scripts/common.sh@368 -- # return 0 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:37.651 --rc genhtml_branch_coverage=1 00:42:37.651 --rc genhtml_function_coverage=1 00:42:37.651 --rc genhtml_legend=1 00:42:37.651 --rc geninfo_all_blocks=1 00:42:37.651 --rc geninfo_unexecuted_blocks=1 00:42:37.651 00:42:37.651 ' 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:37.651 --rc genhtml_branch_coverage=1 00:42:37.651 --rc genhtml_function_coverage=1 00:42:37.651 --rc genhtml_legend=1 00:42:37.651 --rc geninfo_all_blocks=1 00:42:37.651 --rc geninfo_unexecuted_blocks=1 00:42:37.651 00:42:37.651 ' 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:42:37.651 --rc genhtml_branch_coverage=1 00:42:37.651 --rc genhtml_function_coverage=1 00:42:37.651 --rc genhtml_legend=1 00:42:37.651 --rc geninfo_all_blocks=1 00:42:37.651 --rc geninfo_unexecuted_blocks=1 00:42:37.651 00:42:37.651 ' 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:37.651 --rc genhtml_branch_coverage=1 00:42:37.651 --rc genhtml_function_coverage=1 00:42:37.651 --rc genhtml_legend=1 00:42:37.651 --rc geninfo_all_blocks=1 00:42:37.651 --rc geninfo_unexecuted_blocks=1 00:42:37.651 00:42:37.651 ' 00:42:37.651 09:53:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:37.651 09:53:44 thread -- common/autotest_common.sh@10 -- # set +x 00:42:37.651 ************************************ 00:42:37.651 START TEST thread_poller_perf 00:42:37.651 ************************************ 00:42:37.651 09:53:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:42:37.651 [2024-12-09 09:53:44.689646] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:37.651 [2024-12-09 09:53:44.690324] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61018 ] 00:42:37.909 [2024-12-09 09:53:44.889936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:38.167 [2024-12-09 09:53:45.025025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:38.167 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:42:39.546 [2024-12-09T09:53:46.590Z] ====================================== 00:42:39.546 [2024-12-09T09:53:46.590Z] busy:2211410533 (cyc) 00:42:39.546 [2024-12-09T09:53:46.590Z] total_run_count: 300000 00:42:39.546 [2024-12-09T09:53:46.590Z] tsc_hz: 2200000000 (cyc) 00:42:39.546 [2024-12-09T09:53:46.590Z] ====================================== 00:42:39.546 [2024-12-09T09:53:46.590Z] poller_cost: 7371 (cyc), 3350 (nsec) 00:42:39.546 00:42:39.546 real 0m1.626s 00:42:39.546 user 0m1.400s 00:42:39.546 sys 0m0.110s 00:42:39.546 09:53:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:39.546 ************************************ 00:42:39.546 END TEST thread_poller_perf 00:42:39.546 ************************************ 00:42:39.546 09:53:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:42:39.546 09:53:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:42:39.546 09:53:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:42:39.546 09:53:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:39.546 09:53:46 thread -- common/autotest_common.sh@10 -- # set +x 00:42:39.546 ************************************ 00:42:39.546 START TEST thread_poller_perf 00:42:39.546 ************************************ 00:42:39.546 09:53:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:42:39.546 [2024-12-09 09:53:46.362284] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:39.546 [2024-12-09 09:53:46.362452] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61054 ] 00:42:39.546 [2024-12-09 09:53:46.549759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:39.803 Running 1000 pollers for 1 seconds with 0 microseconds period. 
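The summary block above is plain arithmetic over the reported counters: `poller_cost` is busy TSC cycles divided by poller invocations, converted to nanoseconds through the 2.2 GHz TSC rate. Reproducing the 1 us-period run's figures:

```bash
busy=2211410533; runs=300000; tsc_hz=2200000000
echo $(( busy / runs ))                          # 7371 cycles per run
echo $(( busy / runs * 1000000000 / tsc_hz ))    # 3350 nsec per run
```

The 0 us-period run that follows drives `total_run_count` up to roughly 3.36M, and the same arithmetic there yields 656 cycles (298 nsec) per run.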
00:42:39.803 [2024-12-09 09:53:46.712360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:41.176 [2024-12-09T09:53:48.220Z] ====================================== 00:42:41.176 [2024-12-09T09:53:48.220Z] busy:2204648552 (cyc) 00:42:41.176 [2024-12-09T09:53:48.220Z] total_run_count: 3357000 00:42:41.176 [2024-12-09T09:53:48.220Z] tsc_hz: 2200000000 (cyc) 00:42:41.176 [2024-12-09T09:53:48.220Z] ====================================== 00:42:41.176 [2024-12-09T09:53:48.220Z] poller_cost: 656 (cyc), 298 (nsec) 00:42:41.176 ************************************ 00:42:41.176 END TEST thread_poller_perf 00:42:41.176 ************************************ 00:42:41.176 00:42:41.176 real 0m1.640s 00:42:41.176 user 0m1.419s 00:42:41.176 sys 0m0.110s 00:42:41.176 09:53:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:41.176 09:53:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:42:41.176 09:53:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:42:41.176 ************************************ 00:42:41.176 END TEST thread 00:42:41.176 ************************************ 00:42:41.176 00:42:41.176 real 0m3.549s 00:42:41.176 user 0m2.967s 00:42:41.176 sys 0m0.353s 00:42:41.176 09:53:47 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:41.176 09:53:47 thread -- common/autotest_common.sh@10 -- # set +x 00:42:41.176 09:53:48 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:42:41.176 09:53:48 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:42:41.176 09:53:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:41.176 09:53:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:41.176 09:53:48 -- common/autotest_common.sh@10 -- # set +x 00:42:41.176 ************************************ 00:42:41.176 START TEST app_cmdline 00:42:41.176 ************************************ 00:42:41.176 09:53:48 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:42:41.176 * Looking for test storage... 
00:42:41.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:42:41.176 09:53:48 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:41.176 09:53:48 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:42:41.176 09:53:48 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:41.435 09:53:48 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@345 -- # : 1 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:41.435 09:53:48 app_cmdline -- scripts/common.sh@368 -- # return 0 00:42:41.435 09:53:48 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:41.435 09:53:48 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:41.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:41.435 --rc genhtml_branch_coverage=1 00:42:41.435 --rc genhtml_function_coverage=1 00:42:41.435 --rc genhtml_legend=1 00:42:41.435 --rc geninfo_all_blocks=1 00:42:41.435 --rc geninfo_unexecuted_blocks=1 00:42:41.435 00:42:41.435 ' 00:42:41.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:41.435 09:53:48 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:41.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:41.435 --rc genhtml_branch_coverage=1 00:42:41.435 --rc genhtml_function_coverage=1 00:42:41.435 --rc genhtml_legend=1 00:42:41.435 --rc geninfo_all_blocks=1 00:42:41.435 --rc geninfo_unexecuted_blocks=1 00:42:41.435 00:42:41.435 ' 00:42:41.435 09:53:48 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:41.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:41.435 --rc genhtml_branch_coverage=1 00:42:41.435 --rc genhtml_function_coverage=1 00:42:41.435 --rc genhtml_legend=1 00:42:41.435 --rc geninfo_all_blocks=1 00:42:41.435 --rc geninfo_unexecuted_blocks=1 00:42:41.435 00:42:41.435 ' 00:42:41.435 09:53:48 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:41.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:41.435 --rc genhtml_branch_coverage=1 00:42:41.435 --rc genhtml_function_coverage=1 00:42:41.435 --rc genhtml_legend=1 00:42:41.435 --rc geninfo_all_blocks=1 00:42:41.435 --rc geninfo_unexecuted_blocks=1 00:42:41.435 00:42:41.435 ' 00:42:41.435 09:53:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:42:41.435 09:53:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61143 00:42:41.435 09:53:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61143 00:42:41.435 09:53:48 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:42:41.435 09:53:48 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61143 ']' 00:42:41.435 09:53:48 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:41.435 09:53:48 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:41.435 09:53:48 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:41.435 09:53:48 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:41.435 09:53:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:42:41.435 [2024-12-09 09:53:48.365917] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
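`spdk_tgt` is launched here with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so only those two methods are callable over `/var/tmp/spdk.sock`; anything else is rejected with JSON-RPC `-32601` ("Method not found"), which is what the `env_dpdk_get_mem_stats` call further down deliberately triggers. In sketch form:

```bash
spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
rpc.py spdk_get_version           # allowed: returns the version JSON
rpc.py env_dpdk_get_mem_stats     # blocked: error code -32601
```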
00:42:41.435 [2024-12-09 09:53:48.366327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61143 ] 00:42:41.693 [2024-12-09 09:53:48.553219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:41.693 [2024-12-09 09:53:48.712821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.627 09:53:49 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:42.628 09:53:49 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:42:42.628 09:53:49 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:42:42.886 { 00:42:42.886 "version": "SPDK v25.01-pre git sha1 b71c8b8dd", 00:42:42.886 "fields": { 00:42:42.886 "major": 25, 00:42:42.886 "minor": 1, 00:42:42.886 "patch": 0, 00:42:42.886 "suffix": "-pre", 00:42:42.886 "commit": "b71c8b8dd" 00:42:42.886 } 00:42:42.886 } 00:42:42.886 09:53:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:42:42.886 09:53:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:42:42.886 09:53:49 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:42:42.886 09:53:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:42:42.886 09:53:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:42:42.886 09:53:49 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.886 09:53:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:42:42.886 09:53:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:42:42.886 09:53:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:42:42.886 09:53:49 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:43.144 09:53:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:42:43.144 09:53:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:42:43.144 09:53:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:42:43.144 09:53:49 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:42:43.144 09:53:49 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:42:43.144 09:53:49 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:43.144 09:53:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:43.144 09:53:49 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:43.144 09:53:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:43.144 09:53:49 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:43.144 09:53:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:43.144 09:53:49 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:43.144 09:53:49 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:42:43.144 09:53:49 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:42:43.401 request: 00:42:43.401 { 00:42:43.401 "method": "env_dpdk_get_mem_stats", 00:42:43.401 "req_id": 1 00:42:43.401 } 00:42:43.401 Got JSON-RPC error response 00:42:43.401 response: 00:42:43.401 { 00:42:43.401 "code": -32601, 00:42:43.401 "message": "Method not found" 00:42:43.401 } 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:43.401 09:53:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61143 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61143 ']' 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61143 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61143 00:42:43.401 killing process with pid 61143 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61143' 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@973 -- # kill 61143 00:42:43.401 09:53:50 app_cmdline -- common/autotest_common.sh@978 -- # wait 61143 00:42:45.953 ************************************ 00:42:45.953 END TEST app_cmdline 00:42:45.953 ************************************ 00:42:45.953 00:42:45.953 real 0m4.546s 00:42:45.953 user 0m5.038s 00:42:45.953 sys 0m0.676s 00:42:45.953 09:53:52 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:45.953 09:53:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:42:45.953 09:53:52 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:42:45.953 09:53:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:45.953 09:53:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:45.953 09:53:52 -- common/autotest_common.sh@10 -- # set +x 00:42:45.953 ************************************ 00:42:45.953 START TEST version 00:42:45.953 ************************************ 00:42:45.953 09:53:52 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:42:45.953 * Looking for test storage... 
00:42:45.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:42:45.953 09:53:52 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:45.953 09:53:52 version -- common/autotest_common.sh@1711 -- # lcov --version 00:42:45.954 09:53:52 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:45.954 09:53:52 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:45.954 09:53:52 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:45.954 09:53:52 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:45.954 09:53:52 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:45.954 09:53:52 version -- scripts/common.sh@336 -- # IFS=.-: 00:42:45.954 09:53:52 version -- scripts/common.sh@336 -- # read -ra ver1 00:42:45.954 09:53:52 version -- scripts/common.sh@337 -- # IFS=.-: 00:42:45.954 09:53:52 version -- scripts/common.sh@337 -- # read -ra ver2 00:42:45.954 09:53:52 version -- scripts/common.sh@338 -- # local 'op=<' 00:42:45.954 09:53:52 version -- scripts/common.sh@340 -- # ver1_l=2 00:42:45.954 09:53:52 version -- scripts/common.sh@341 -- # ver2_l=1 00:42:45.954 09:53:52 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:45.954 09:53:52 version -- scripts/common.sh@344 -- # case "$op" in 00:42:45.954 09:53:52 version -- scripts/common.sh@345 -- # : 1 00:42:45.954 09:53:52 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:45.954 09:53:52 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:45.954 09:53:52 version -- scripts/common.sh@365 -- # decimal 1 00:42:45.954 09:53:52 version -- scripts/common.sh@353 -- # local d=1 00:42:45.954 09:53:52 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:45.954 09:53:52 version -- scripts/common.sh@355 -- # echo 1 00:42:45.954 09:53:52 version -- scripts/common.sh@365 -- # ver1[v]=1 00:42:45.954 09:53:52 version -- scripts/common.sh@366 -- # decimal 2 00:42:45.954 09:53:52 version -- scripts/common.sh@353 -- # local d=2 00:42:45.954 09:53:52 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:45.954 09:53:52 version -- scripts/common.sh@355 -- # echo 2 00:42:45.954 09:53:52 version -- scripts/common.sh@366 -- # ver2[v]=2 00:42:45.954 09:53:52 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:45.954 09:53:52 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:45.954 09:53:52 version -- scripts/common.sh@368 -- # return 0 00:42:45.954 09:53:52 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:45.954 09:53:52 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.954 --rc genhtml_branch_coverage=1 00:42:45.954 --rc genhtml_function_coverage=1 00:42:45.954 --rc genhtml_legend=1 00:42:45.954 --rc geninfo_all_blocks=1 00:42:45.954 --rc geninfo_unexecuted_blocks=1 00:42:45.954 00:42:45.954 ' 00:42:45.954 09:53:52 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.954 --rc genhtml_branch_coverage=1 00:42:45.954 --rc genhtml_function_coverage=1 00:42:45.954 --rc genhtml_legend=1 00:42:45.954 --rc geninfo_all_blocks=1 00:42:45.954 --rc geninfo_unexecuted_blocks=1 00:42:45.954 00:42:45.954 ' 00:42:45.954 09:53:52 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:45.954 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:42:45.954 --rc genhtml_branch_coverage=1 00:42:45.954 --rc genhtml_function_coverage=1 00:42:45.954 --rc genhtml_legend=1 00:42:45.954 --rc geninfo_all_blocks=1 00:42:45.954 --rc geninfo_unexecuted_blocks=1 00:42:45.954 00:42:45.954 ' 00:42:45.954 09:53:52 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.954 --rc genhtml_branch_coverage=1 00:42:45.954 --rc genhtml_function_coverage=1 00:42:45.954 --rc genhtml_legend=1 00:42:45.954 --rc geninfo_all_blocks=1 00:42:45.954 --rc geninfo_unexecuted_blocks=1 00:42:45.954 00:42:45.954 ' 00:42:45.954 09:53:52 version -- app/version.sh@17 -- # get_header_version major 00:42:45.954 09:53:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:42:45.954 09:53:52 version -- app/version.sh@14 -- # cut -f2 00:42:45.954 09:53:52 version -- app/version.sh@14 -- # tr -d '"' 00:42:45.954 09:53:52 version -- app/version.sh@17 -- # major=25 00:42:45.954 09:53:52 version -- app/version.sh@18 -- # get_header_version minor 00:42:45.954 09:53:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:42:45.954 09:53:52 version -- app/version.sh@14 -- # cut -f2 00:42:45.954 09:53:52 version -- app/version.sh@14 -- # tr -d '"' 00:42:45.954 09:53:52 version -- app/version.sh@18 -- # minor=1 00:42:45.954 09:53:52 version -- app/version.sh@19 -- # get_header_version patch 00:42:45.954 09:53:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:42:45.954 09:53:52 version -- app/version.sh@14 -- # cut -f2 00:42:45.954 09:53:52 version -- app/version.sh@14 -- # tr -d '"' 00:42:45.954 09:53:52 version -- app/version.sh@19 -- # patch=0 00:42:45.954 09:53:52 version -- app/version.sh@20 -- # get_header_version suffix 00:42:45.954 09:53:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:42:45.954 09:53:52 version -- app/version.sh@14 -- # cut -f2 00:42:45.954 09:53:52 version -- app/version.sh@14 -- # tr -d '"' 00:42:45.954 09:53:52 version -- app/version.sh@20 -- # suffix=-pre 00:42:45.954 09:53:52 version -- app/version.sh@22 -- # version=25.1 00:42:45.954 09:53:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:42:45.954 09:53:52 version -- app/version.sh@28 -- # version=25.1rc0 00:42:45.954 09:53:52 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:42:45.954 09:53:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:42:45.954 09:53:52 version -- app/version.sh@30 -- # py_version=25.1rc0 00:42:45.954 09:53:52 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:42:45.954 00:42:45.954 real 0m0.249s 00:42:45.954 user 0m0.173s 00:42:45.954 sys 0m0.110s 00:42:45.954 09:53:52 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:45.954 09:53:52 version -- common/autotest_common.sh@10 -- # set +x 00:42:45.954 ************************************ 00:42:45.954 END TEST version 00:42:45.954 ************************************ 00:42:45.954 09:53:52 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:42:45.954 09:53:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:42:45.954 09:53:52 -- spdk/autotest.sh@194 -- # uname -s 00:42:45.954 09:53:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:42:45.954 09:53:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:42:45.954 09:53:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:42:45.954 09:53:52 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:42:45.954 09:53:52 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:42:45.954 09:53:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:45.954 09:53:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:45.954 09:53:52 -- common/autotest_common.sh@10 -- # set +x 00:42:45.954 ************************************ 00:42:45.954 START TEST blockdev_nvme 00:42:45.954 ************************************ 00:42:45.954 09:53:52 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:42:46.212 * Looking for test storage... 00:42:46.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:42:46.212 09:53:53 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:46.212 09:53:53 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:42:46.212 09:53:53 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:46.212 09:53:53 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:46.212 09:53:53 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:42:46.212 09:53:53 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:46.212 09:53:53 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:46.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.212 --rc genhtml_branch_coverage=1 00:42:46.212 --rc genhtml_function_coverage=1 00:42:46.212 --rc genhtml_legend=1 00:42:46.212 --rc geninfo_all_blocks=1 00:42:46.212 --rc geninfo_unexecuted_blocks=1 00:42:46.212 00:42:46.212 ' 00:42:46.212 09:53:53 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:46.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.213 --rc genhtml_branch_coverage=1 00:42:46.213 --rc genhtml_function_coverage=1 00:42:46.213 --rc genhtml_legend=1 00:42:46.213 --rc geninfo_all_blocks=1 00:42:46.213 --rc geninfo_unexecuted_blocks=1 00:42:46.213 00:42:46.213 ' 00:42:46.213 09:53:53 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:46.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.213 --rc genhtml_branch_coverage=1 00:42:46.213 --rc genhtml_function_coverage=1 00:42:46.213 --rc genhtml_legend=1 00:42:46.213 --rc geninfo_all_blocks=1 00:42:46.213 --rc geninfo_unexecuted_blocks=1 00:42:46.213 00:42:46.213 ' 00:42:46.213 09:53:53 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:46.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.213 --rc genhtml_branch_coverage=1 00:42:46.213 --rc genhtml_function_coverage=1 00:42:46.213 --rc genhtml_legend=1 00:42:46.213 --rc geninfo_all_blocks=1 00:42:46.213 --rc geninfo_unexecuted_blocks=1 00:42:46.213 00:42:46.213 ' 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:42:46.213 09:53:53 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61336 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:42:46.213 09:53:53 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61336 00:42:46.213 09:53:53 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61336 ']' 00:42:46.213 09:53:53 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:46.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:46.213 09:53:53 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:46.213 09:53:53 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:46.213 09:53:53 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:46.213 09:53:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:46.470 [2024-12-09 09:53:53.275937] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
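The launch just logged follows a standard SPDK two-step: blockdev.sh backgrounds spdk_tgt and then blocks in waitforlisten until the target's RPC socket accepts requests. A minimal sketch of that polling loop, assuming scripts/rpc.py on the repo path and the default /var/tmp/spdk.sock; the helper name wait_for_rpc_sock is hypothetical, the real logic lives in waitforlisten in test/common/autotest_common.sh:

  # Hypothetical stand-in for waitforlisten: poll the RPC socket until
  # the freshly launched target answers, bailing out if it dies first.
  wait_for_rpc_sock() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1  # target exited during startup
          # rpc_get_methods succeeds only once the socket is up and serving
          scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1  # gave up waiting for the listener
  }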
00:42:46.470 [2024-12-09 09:53:53.276114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61336 ] 00:42:46.470 [2024-12-09 09:53:53.467559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:46.727 [2024-12-09 09:53:53.624641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:47.659 09:53:54 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:47.659 09:53:54 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:42:47.659 09:53:54 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:42:47.659 09:53:54 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:42:47.659 09:53:54 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:42:47.659 09:53:54 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:42:47.659 09:53:54 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:42:47.659 09:53:54 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:42:47.659 09:53:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.659 09:53:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:47.917 09:53:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.917 09:53:54 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:42:47.917 09:53:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.917 09:53:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:47.917 09:53:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.917 09:53:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:42:47.917 09:53:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:42:47.917 09:53:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.917 09:53:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:47.917 09:53:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.917 09:53:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:42:47.917 09:53:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.917 09:53:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:47.917 09:53:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.917 09:53:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:42:47.917 09:53:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.917 09:53:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:48.174 09:53:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.174 09:53:54 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:42:48.174 09:53:54 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:42:48.174 09:53:54 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:42:48.174 09:53:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.174 09:53:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:48.174 09:53:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.174 09:53:55 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:42:48.174 09:53:55 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:42:48.175 09:53:55 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "75c352e8-f716-4cef-9bb2-2ea662ef8bed"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "75c352e8-f716-4cef-9bb2-2ea662ef8bed",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "9b4c6ad9-874a-4515-a27b-f557e02515ec"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9b4c6ad9-874a-4515-a27b-f557e02515ec",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "371c8b1c-59b8-4257-93b9-d97f8ec4eaef"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "371c8b1c-59b8-4257-93b9-d97f8ec4eaef",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "9f294892-71e7-4344-9a20-540568d3e701"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9f294892-71e7-4344-9a20-540568d3e701",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "28993095-2bdd-4295-a4f5-24f622369ca5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "28993095-2bdd-4295-a4f5-24f622369ca5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f7f5d61d-a1a2-4c38-b592-097e080343ff"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f7f5d61d-a1a2-4c38-b592-097e080343ff",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:42:48.175 09:53:55 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:42:48.175 09:53:55 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:42:48.175 09:53:55 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:42:48.175 09:53:55 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61336 00:42:48.175 09:53:55 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61336 ']' 00:42:48.175 09:53:55 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61336 00:42:48.175 09:53:55 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:42:48.175 09:53:55 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:48.175 09:53:55 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61336 00:42:48.175 09:53:55 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:48.175 09:53:55 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:48.175 killing process with pid 61336 00:42:48.175 09:53:55 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61336' 00:42:48.175 09:53:55 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61336 00:42:48.175 09:53:55 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61336 00:42:50.702 09:53:57 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:50.702 09:53:57 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:42:50.702 09:53:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:42:50.702 09:53:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:50.702 09:53:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:50.702 ************************************ 00:42:50.702 START TEST bdev_hello_world 00:42:50.702 ************************************ 00:42:50.702 09:53:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:42:50.702 [2024-12-09 09:53:57.495979] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:50.702 [2024-12-09 09:53:57.496163] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61433 ] 00:42:50.702 [2024-12-09 09:53:57.681410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:50.961 [2024-12-09 09:53:57.809693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:51.528 [2024-12-09 09:53:58.485743] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:42:51.528 [2024-12-09 09:53:58.485827] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:42:51.528 [2024-12-09 09:53:58.485856] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:42:51.528 [2024-12-09 09:53:58.489045] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:42:51.528 [2024-12-09 09:53:58.489586] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:42:51.528 [2024-12-09 09:53:58.489630] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:42:51.528 [2024-12-09 09:53:58.489924] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
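The hello_bdev run above opens the Nvme0n1 bdev, writes a test string, reads it back, and shuts down. A minimal sketch of reproducing that run by hand from the repo root, assuming gen_nvme.sh's --json-with-subsystems flag and the in-tree paths shown in the log:

  # Regenerate the NVMe bdev config for the attached controllers, then
  # run the hello-world example; -b names the bdev the example opens.
  cd /home/vagrant/spdk_repo/spdk
  scripts/gen_nvme.sh --json-with-subsystems > test/bdev/bdev.json
  build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1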
00:42:51.528 00:42:51.528 [2024-12-09 09:53:58.489972] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:42:52.902 00:42:52.902 real 0m2.187s 00:42:52.902 user 0m1.778s 00:42:52.902 sys 0m0.296s 00:42:52.902 09:53:59 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:52.902 09:53:59 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:42:52.902 ************************************ 00:42:52.902 END TEST bdev_hello_world 00:42:52.902 ************************************ 00:42:52.902 09:53:59 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:42:52.902 09:53:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:52.902 09:53:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:52.902 09:53:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:52.902 ************************************ 00:42:52.902 START TEST bdev_bounds 00:42:52.902 ************************************ 00:42:52.902 09:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:42:52.902 09:53:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61475 00:42:52.902 09:53:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:52.902 Process bdevio pid: 61475 00:42:52.902 09:53:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:42:52.902 09:53:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61475' 00:42:52.902 09:53:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61475 00:42:52.902 09:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61475 ']' 00:42:52.902 09:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:52.902 09:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:52.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:52.902 09:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:52.902 09:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:52.902 09:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:52.902 [2024-12-09 09:53:59.748263] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
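bdevio runs in two stages here: launched with -w it loads the bdev config, registers its CUnit suites, and parks; tests.py perform_tests then kicks the run off over the RPC socket. The suites that follow drive write/read, zeroed reads, offset bounds, controller resets, and fused compare-and-write against every namespace. A sketch of the two-stage pattern, assuming the in-tree paths from the log:

  # Start bdevio in wait mode (-w) with no reserved memory (-s 0),
  # then trigger the CUnit suites over RPC once the socket is up.
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  bdevio_pid=$!
  # wait for /var/tmp/spdk.sock as sketched earlier, then:
  test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"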
00:42:52.902 [2024-12-09 09:53:59.748504] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61475 ] 00:42:52.902 [2024-12-09 09:53:59.933648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:53.160 [2024-12-09 09:54:00.068113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:53.160 [2024-12-09 09:54:00.068236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:53.160 [2024-12-09 09:54:00.068248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:53.727 09:54:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:53.727 09:54:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:42:53.727 09:54:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:42:54.016 I/O targets: 00:42:54.016 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:42:54.016 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:42:54.016 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:42:54.016 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:42:54.016 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:42:54.016 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:42:54.016 00:42:54.016 00:42:54.016 CUnit - A unit testing framework for C - Version 2.1-3 00:42:54.016 http://cunit.sourceforge.net/ 00:42:54.016 00:42:54.016 00:42:54.016 Suite: bdevio tests on: Nvme3n1 00:42:54.016 Test: blockdev write read block ...passed 00:42:54.016 Test: blockdev write zeroes read block ...passed 00:42:54.016 Test: blockdev write zeroes read no split ...passed 00:42:54.016 Test: blockdev write zeroes read split ...passed 00:42:54.016 Test: blockdev write zeroes read split partial ...passed 00:42:54.016 Test: blockdev reset ...[2024-12-09 09:54:00.952514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:42:54.016 passed 00:42:54.016 Test: blockdev write read 8 blocks ...[2024-12-09 09:54:00.956634] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
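The reset step that just passed tears the controller down and reattaches it underneath the open bdev. Outside of bdevio, a comparable reset can be requested administratively; this assumes a default RPC socket and uses the controller name (Nvme3), not the namespace bdev name (Nvme3n1):

  # Ask the running target to reset the NVMe controller behind Nvme3n1;
  # pending I/O resumes once the controller comes back.
  scripts/rpc.py bdev_nvme_reset_controller Nvme3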
00:42:54.016 passed 00:42:54.016 Test: blockdev write read size > 128k ...passed 00:42:54.016 Test: blockdev write read invalid size ...passed 00:42:54.016 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:54.016 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:54.016 Test: blockdev write read max offset ...passed 00:42:54.016 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:54.016 Test: blockdev writev readv 8 blocks ...passed 00:42:54.016 Test: blockdev writev readv 30 x 1block ...passed 00:42:54.016 Test: blockdev writev readv block ...passed 00:42:54.016 Test: blockdev writev readv size > 128k ...passed 00:42:54.016 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:54.016 Test: blockdev comparev and writev ...[2024-12-09 09:54:00.965104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c280a000 len:0x1000 00:42:54.016 [2024-12-09 09:54:00.965165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:54.016 passed 00:42:54.016 Test: blockdev nvme passthru rw ...passed 00:42:54.016 Test: blockdev nvme passthru vendor specific ...passed 00:42:54.016 Test: blockdev nvme admin passthru ...[2024-12-09 09:54:00.966082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:42:54.016 [2024-12-09 09:54:00.966183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:42:54.016 passed 00:42:54.016 Test: blockdev copy ...passed 00:42:54.016 Suite: bdevio tests on: Nvme2n3 00:42:54.016 Test: blockdev write read block ...passed 00:42:54.016 Test: blockdev write zeroes read block ...passed 00:42:54.016 Test: blockdev write zeroes read no split ...passed 00:42:54.016 Test: blockdev write zeroes read split ...passed 00:42:54.275 Test: blockdev write zeroes read split partial ...passed 00:42:54.275 Test: blockdev reset ...[2024-12-09 09:54:01.050847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:42:54.275 passed 00:42:54.275 Test: blockdev write read 8 blocks ...[2024-12-09 09:54:01.055518] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:42:54.275 passed 00:42:54.275 Test: blockdev write read size > 128k ...passed 00:42:54.275 Test: blockdev write read invalid size ...passed 00:42:54.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:54.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:54.275 Test: blockdev write read max offset ...passed 00:42:54.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:54.275 Test: blockdev writev readv 8 blocks ...passed 00:42:54.275 Test: blockdev writev readv 30 x 1block ...passed 00:42:54.275 Test: blockdev writev readv block ...passed 00:42:54.275 Test: blockdev writev readv size > 128k ...passed 00:42:54.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:54.275 Test: blockdev comparev and writev ...[2024-12-09 09:54:01.064183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a5a06000 len:0x1000 00:42:54.275 [2024-12-09 09:54:01.064273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:54.275 passed 00:42:54.275 Test: blockdev nvme passthru rw ...passed 00:42:54.275 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:54:01.065106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:42:54.275 [2024-12-09 09:54:01.065150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:42:54.275 passed 00:42:54.275 Test: blockdev nvme admin passthru ...passed 00:42:54.275 Test: blockdev copy ...passed 00:42:54.275 Suite: bdevio tests on: Nvme2n2 00:42:54.275 Test: blockdev write read block ...passed 00:42:54.275 Test: blockdev write zeroes read block ...passed 00:42:54.275 Test: blockdev write zeroes read no split ...passed 00:42:54.275 Test: blockdev write zeroes read split ...passed 00:42:54.275 Test: blockdev write zeroes read split partial ...passed 00:42:54.275 Test: blockdev reset ...[2024-12-09 09:54:01.137216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:42:54.275 [2024-12-09 09:54:01.141350] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:42:54.275 passed 00:42:54.275 Test: blockdev write read 8 blocks ...passed 00:42:54.276 Test: blockdev write read size > 128k ...passed 00:42:54.276 Test: blockdev write read invalid size ...passed 00:42:54.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:54.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:54.276 Test: blockdev write read max offset ...passed 00:42:54.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:54.276 Test: blockdev writev readv 8 blocks ...passed 00:42:54.276 Test: blockdev writev readv 30 x 1block ...passed 00:42:54.276 Test: blockdev writev readv block ...passed 00:42:54.276 Test: blockdev writev readv size > 128k ...passed 00:42:54.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:54.276 Test: blockdev comparev and writev ...[2024-12-09 09:54:01.149924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d283c000 len:0x1000 00:42:54.276 [2024-12-09 09:54:01.149983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:54.276 passed 00:42:54.276 Test: blockdev nvme passthru rw ...passed 00:42:54.276 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:54:01.150819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:42:54.276 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:42:54.276 [2024-12-09 09:54:01.150979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:42:54.276 passed 00:42:54.276 Test: blockdev copy ...passed 00:42:54.276 Suite: bdevio tests on: Nvme2n1 00:42:54.276 Test: blockdev write read block ...passed 00:42:54.276 Test: blockdev write zeroes read block ...passed 00:42:54.276 Test: blockdev write zeroes read no split ...passed 00:42:54.276 Test: blockdev write zeroes read split ...passed 00:42:54.276 Test: blockdev write zeroes read split partial ...passed 00:42:54.276 Test: blockdev reset ...[2024-12-09 09:54:01.240925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:42:54.276 [2024-12-09 09:54:01.246282] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:42:54.276 passed 00:42:54.276 Test: blockdev write read 8 blocks ...passed 00:42:54.276 Test: blockdev write read size > 128k ...passed 00:42:54.276 Test: blockdev write read invalid size ...passed 00:42:54.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:54.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:54.276 Test: blockdev write read max offset ...passed 00:42:54.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:54.276 Test: blockdev writev readv 8 blocks ...passed 00:42:54.276 Test: blockdev writev readv 30 x 1block ...passed 00:42:54.276 Test: blockdev writev readv block ...passed 00:42:54.276 Test: blockdev writev readv size > 128k ...passed 00:42:54.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:54.276 Test: blockdev comparev and writev ...[2024-12-09 09:54:01.257355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:42:54.276 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d2838000 len:0x1000 00:42:54.276 [2024-12-09 09:54:01.257601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:54.276 passed 00:42:54.276 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:54:01.258741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:42:54.276 passed[2024-12-09 09:54:01.258798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:42:54.276 00:42:54.276 Test: blockdev nvme admin passthru ...passed 00:42:54.276 Test: blockdev copy ...passed 00:42:54.276 Suite: bdevio tests on: Nvme1n1 00:42:54.276 Test: blockdev write read block ...passed 00:42:54.276 Test: blockdev write zeroes read block ...passed 00:42:54.276 Test: blockdev write zeroes read no split ...passed 00:42:54.276 Test: blockdev write zeroes read split ...passed 00:42:54.533 Test: blockdev write zeroes read split partial ...passed 00:42:54.533 Test: blockdev reset ...[2024-12-09 09:54:01.341784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:42:54.533 [2024-12-09 09:54:01.346625] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:42:54.533 passed 00:42:54.533 Test: blockdev write read 8 blocks ...passed 00:42:54.533 Test: blockdev write read size > 128k ...passed 00:42:54.533 Test: blockdev write read invalid size ...passed 00:42:54.533 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:54.533 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:54.533 Test: blockdev write read max offset ...passed 00:42:54.533 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:54.533 Test: blockdev writev readv 8 blocks ...passed 00:42:54.533 Test: blockdev writev readv 30 x 1block ...passed 00:42:54.533 Test: blockdev writev readv block ...passed 00:42:54.533 Test: blockdev writev readv size > 128k ...passed 00:42:54.533 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:54.533 Test: blockdev comparev and writev ...[2024-12-09 09:54:01.359770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:42:54.533 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d2834000 len:0x1000 00:42:54.533 [2024-12-09 09:54:01.360012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:54.533 passed 00:42:54.533 Test: blockdev nvme passthru vendor specific ...passed 00:42:54.533 Test: blockdev nvme admin passthru ...[2024-12-09 09:54:01.361037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:42:54.533 [2024-12-09 09:54:01.361118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:42:54.533 passed 00:42:54.533 Test: blockdev copy ...passed 00:42:54.533 Suite: bdevio tests on: Nvme0n1 00:42:54.533 Test: blockdev write read block ...passed 00:42:54.533 Test: blockdev write zeroes read block ...passed 00:42:54.533 Test: blockdev write zeroes read no split ...passed 00:42:54.533 Test: blockdev write zeroes read split ...passed 00:42:54.533 Test: blockdev write zeroes read split partial ...passed 00:42:54.533 Test: blockdev reset ...[2024-12-09 09:54:01.436274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:42:54.533 passed 00:42:54.533 Test: blockdev write read 8 blocks ...[2024-12-09 09:54:01.440365] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:42:54.533 passed 00:42:54.533 Test: blockdev write read size > 128k ...passed 00:42:54.533 Test: blockdev write read invalid size ...passed 00:42:54.533 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:54.533 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:54.533 Test: blockdev write read max offset ...passed 00:42:54.533 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:54.533 Test: blockdev writev readv 8 blocks ...passed 00:42:54.533 Test: blockdev writev readv 30 x 1block ...passed 00:42:54.533 Test: blockdev writev readv block ...passed 00:42:54.533 Test: blockdev writev readv size > 128k ...passed 00:42:54.533 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:54.533 Test: blockdev comparev and writev ...passed 00:42:54.533 Test: blockdev nvme passthru rw ...[2024-12-09 09:54:01.449053] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:42:54.533 separate metadata which is not supported yet. 00:42:54.533 passed 00:42:54.533 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:54:01.449626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:42:54.533 [2024-12-09 09:54:01.449691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:42:54.533 passed 00:42:54.533 Test: blockdev nvme admin passthru ...passed 00:42:54.533 Test: blockdev copy ...passed 00:42:54.533 00:42:54.533 Run Summary: Type Total Ran Passed Failed Inactive 00:42:54.533 suites 6 6 n/a 0 0 00:42:54.533 tests 138 138 138 0 0 00:42:54.533 asserts 893 893 893 0 n/a 00:42:54.533 00:42:54.533 Elapsed time = 1.537 seconds 00:42:54.533 0 00:42:54.533 09:54:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61475 00:42:54.533 09:54:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61475 ']' 00:42:54.533 09:54:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61475 00:42:54.533 09:54:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:42:54.533 09:54:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:54.533 09:54:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61475 00:42:54.533 killing process with pid 61475 00:42:54.533 09:54:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:54.533 09:54:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:54.533 09:54:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61475' 00:42:54.533 09:54:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61475 00:42:54.533 09:54:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61475 00:42:55.486 09:54:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:42:55.486 00:42:55.486 real 0m2.868s 00:42:55.486 user 0m7.242s 00:42:55.486 sys 0m0.454s 00:42:55.486 09:54:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:55.486 ************************************ 00:42:55.486 END TEST bdev_bounds 00:42:55.486 ************************************ 00:42:55.486 09:54:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # 
set +x 00:42:55.744 09:54:02 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:42:55.744 09:54:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:42:55.744 09:54:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:55.744 09:54:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:55.744 ************************************ 00:42:55.744 START TEST bdev_nbd 00:42:55.744 ************************************ 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61535 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:42:55.744 09:54:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61535 /var/tmp/spdk-nbd.sock 00:42:55.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
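This NBD stage maps each bdev to a kernel /dev/nbdX node so ordinary block tools can exercise SPDK bdevs. A minimal sketch of exporting a single bdev, assuming the nbd kernel module is available and the same -s /var/tmp/spdk-nbd.sock socket used above:

  # Export Nvme0n1 as /dev/nbd0 via the running bdev_svc, wait for the
  # kernel to register the partition entry, then tear it down again.
  modprobe nbd
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  while ! grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0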
00:42:55.745 09:54:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61535 ']' 00:42:55.745 09:54:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:42:55.745 09:54:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:55.745 09:54:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:42:55.745 09:54:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:55.745 09:54:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:42:55.745 [2024-12-09 09:54:02.671653] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:55.745 [2024-12-09 09:54:02.672093] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:56.003 [2024-12-09 09:54:02.865471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:56.003 [2024-12-09 09:54:03.035187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:56.938 09:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:56.938 09:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:42:56.938 09:54:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:42:56.938 09:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:56.938 09:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:42:56.938 09:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:42:56.939 09:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:42:56.939 09:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:56.939 09:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:42:56.939 09:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:42:56.939 09:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:42:56.939 09:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:42:56.939 09:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:42:56.939 09:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:42:56.939 09:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # 
local i 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:57.197 1+0 records in 00:42:57.197 1+0 records out 00:42:57.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532261 s, 7.7 MB/s 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:42:57.197 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:57.455 1+0 records in 00:42:57.455 1+0 records out 00:42:57.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645135 s, 6.3 MB/s 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
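Every device gets the same smoke test shown above: a single 4 KiB direct-I/O read through the NBD node, checked by output size. The pattern, assuming a mapped /dev/nbd0 and the hypothetical scratch file /tmp/nbdtest:

  # Read one 4 KiB block straight off the NBD device (iflag=direct
  # bypasses the page cache) and confirm exactly 4096 bytes arrived.
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] && echo "nbd0 readable"
  rm -f /tmp/nbdtest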
00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:42:57.455 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:42:57.713 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:42:57.713 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:57.971 1+0 records in 00:42:57.971 1+0 records out 00:42:57.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504894 s, 8.1 MB/s 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:42:57.971 09:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:58.236 1+0 records in 00:42:58.236 1+0 records out 00:42:58.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602238 s, 6.8 MB/s 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:42:58.236 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:58.505 1+0 records in 00:42:58.505 1+0 records out 00:42:58.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597812 s, 6.9 MB/s 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:42:58.505 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:59.071 1+0 records in 00:42:59.071 1+0 records out 00:42:59.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000829565 s, 4.9 MB/s 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:42:59.071 09:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:59.328 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:42:59.328 { 00:42:59.328 "nbd_device": "/dev/nbd0", 00:42:59.328 "bdev_name": "Nvme0n1" 00:42:59.328 }, 00:42:59.328 { 00:42:59.328 "nbd_device": "/dev/nbd1", 00:42:59.329 "bdev_name": "Nvme1n1" 00:42:59.329 }, 00:42:59.329 { 00:42:59.329 "nbd_device": "/dev/nbd2", 00:42:59.329 "bdev_name": "Nvme2n1" 00:42:59.329 }, 00:42:59.329 { 00:42:59.329 "nbd_device": "/dev/nbd3", 00:42:59.329 "bdev_name": "Nvme2n2" 00:42:59.329 }, 00:42:59.329 { 00:42:59.329 "nbd_device": "/dev/nbd4", 00:42:59.329 "bdev_name": "Nvme2n3" 00:42:59.329 }, 00:42:59.329 { 00:42:59.329 "nbd_device": "/dev/nbd5", 00:42:59.329 "bdev_name": "Nvme3n1" 00:42:59.329 } 00:42:59.329 ]' 00:42:59.329 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:42:59.329 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 
00:42:59.329 { 00:42:59.329 "nbd_device": "/dev/nbd0", 00:42:59.329 "bdev_name": "Nvme0n1" 00:42:59.329 }, 00:42:59.329 { 00:42:59.329 "nbd_device": "/dev/nbd1", 00:42:59.329 "bdev_name": "Nvme1n1" 00:42:59.329 }, 00:42:59.329 { 00:42:59.329 "nbd_device": "/dev/nbd2", 00:42:59.329 "bdev_name": "Nvme2n1" 00:42:59.329 }, 00:42:59.329 { 00:42:59.329 "nbd_device": "/dev/nbd3", 00:42:59.329 "bdev_name": "Nvme2n2" 00:42:59.329 }, 00:42:59.329 { 00:42:59.329 "nbd_device": "/dev/nbd4", 00:42:59.329 "bdev_name": "Nvme2n3" 00:42:59.329 }, 00:42:59.329 { 00:42:59.329 "nbd_device": "/dev/nbd5", 00:42:59.329 "bdev_name": "Nvme3n1" 00:42:59.329 } 00:42:59.329 ]' 00:42:59.329 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:42:59.329 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:42:59.329 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:59.329 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:42:59.329 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:59.329 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:59.329 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:59.329 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:59.588 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:59.588 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:59.588 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:59.588 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:59.588 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:59.588 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:59.588 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:59.588 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:59.588 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:59.588 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:42:59.846 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:59.846 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:59.846 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:59.846 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:59.846 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:59.846 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:59.846 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:59.846 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:59.846 09:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:59.846 09:54:06 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:43:00.420 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:43:00.420 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:43:00.420 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:43:00.420 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:00.420 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:00.420 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:43:00.420 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:00.421 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:00.421 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:00.421 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:43:00.680 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:43:00.680 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:43:00.680 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:43:00.680 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:00.680 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:00.680 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:43:00.680 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:00.680 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:00.680 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:00.680 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:43:00.939 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:43:00.939 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:43:00.939 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:43:00.939 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:00.939 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:00.939 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:43:00.939 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:00.939 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:00.939 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:00.939 09:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:43:01.198 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:43:01.198 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:43:01.198 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:43:01.198 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:01.198 09:54:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:01.198 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:43:01.198 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:01.198 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:01.199 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:01.199 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:01.199 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:01.457 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:01.457 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:01.457 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:01.457 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:01.457 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:01.457 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:01.457 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:01.457 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:01.458 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:43:01.716 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:43:01.975 /dev/nbd0 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:01.975 1+0 records in 00:43:01.975 1+0 records out 00:43:01.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413049 s, 9.9 MB/s 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:43:01.975 09:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:43:02.234 /dev/nbd1 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:02.234 09:54:09 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:02.234 1+0 records in 00:43:02.234 1+0 records out 00:43:02.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472851 s, 8.7 MB/s 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:43:02.234 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:43:02.493 /dev/nbd10 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:02.493 1+0 records in 00:43:02.493 1+0 records out 00:43:02.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807006 s, 5.1 MB/s 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:43:02.493 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:43:03.091 /dev/nbd11 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 
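Unlike the first pass further up, where nbd_start_disk was called with only a bdev name and the RPC reply supplied the allocated node, this block pins each bdev to an explicit node from a parallel nbd_list. A sketch of that pairing loop, with array names taken from the traced nbd_common.sh and the rpc.py/socket paths copied from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    # Export each bdev on its requested node, then wait for it to come up.
    for ((i = 0; i < 6; i++)); do
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done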
00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:03.091 1+0 records in 00:43:03.091 1+0 records out 00:43:03.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060286 s, 6.8 MB/s 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:43:03.091 09:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:43:03.350 /dev/nbd12 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:03.350 1+0 records in 00:43:03.350 1+0 records out 00:43:03.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000986586 s, 4.2 MB/s 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
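Both the empty-list check in the stop path above (count=0) and the six-device check that follows ask the RPC server for its current disk map and count the /dev/nbd names in it. A sketch of that check, using the same jq filter as the trace; the || true guard is an assumption to cover grep -c exiting non-zero on a zero count, which would match the bare true step visible in the trace:

    nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    # Expected: 6 while all exports are up, 0 after every nbd_stop_disk.
    [ "$count" -eq 6 ]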
00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:03.350 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:43:03.351 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:43:03.633 /dev/nbd13 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:03.633 1+0 records in 00:43:03.633 1+0 records out 00:43:03.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083741 s, 4.9 MB/s 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:03.633 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:03.893 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:43:03.893 { 00:43:03.893 "nbd_device": "/dev/nbd0", 00:43:03.893 "bdev_name": "Nvme0n1" 00:43:03.893 }, 00:43:03.893 { 00:43:03.893 "nbd_device": "/dev/nbd1", 00:43:03.893 "bdev_name": "Nvme1n1" 00:43:03.893 }, 00:43:03.893 { 00:43:03.893 "nbd_device": 
"/dev/nbd10", 00:43:03.893 "bdev_name": "Nvme2n1" 00:43:03.893 }, 00:43:03.893 { 00:43:03.893 "nbd_device": "/dev/nbd11", 00:43:03.893 "bdev_name": "Nvme2n2" 00:43:03.893 }, 00:43:03.893 { 00:43:03.893 "nbd_device": "/dev/nbd12", 00:43:03.893 "bdev_name": "Nvme2n3" 00:43:03.893 }, 00:43:03.893 { 00:43:03.893 "nbd_device": "/dev/nbd13", 00:43:03.893 "bdev_name": "Nvme3n1" 00:43:03.893 } 00:43:03.893 ]' 00:43:03.893 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:03.893 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:43:03.893 { 00:43:03.893 "nbd_device": "/dev/nbd0", 00:43:03.893 "bdev_name": "Nvme0n1" 00:43:03.893 }, 00:43:03.893 { 00:43:03.893 "nbd_device": "/dev/nbd1", 00:43:03.893 "bdev_name": "Nvme1n1" 00:43:03.893 }, 00:43:03.893 { 00:43:03.893 "nbd_device": "/dev/nbd10", 00:43:03.893 "bdev_name": "Nvme2n1" 00:43:03.893 }, 00:43:03.893 { 00:43:03.893 "nbd_device": "/dev/nbd11", 00:43:03.893 "bdev_name": "Nvme2n2" 00:43:03.893 }, 00:43:03.893 { 00:43:03.893 "nbd_device": "/dev/nbd12", 00:43:03.893 "bdev_name": "Nvme2n3" 00:43:03.893 }, 00:43:03.893 { 00:43:03.893 "nbd_device": "/dev/nbd13", 00:43:03.893 "bdev_name": "Nvme3n1" 00:43:03.893 } 00:43:03.893 ]' 00:43:03.893 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:43:03.893 /dev/nbd1 00:43:03.893 /dev/nbd10 00:43:03.893 /dev/nbd11 00:43:03.893 /dev/nbd12 00:43:03.893 /dev/nbd13' 00:43:03.893 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:43:03.893 /dev/nbd1 00:43:03.893 /dev/nbd10 00:43:03.893 /dev/nbd11 00:43:03.893 /dev/nbd12 00:43:03.893 /dev/nbd13' 00:43:03.893 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:03.893 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:43:03.893 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:43:03.893 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:43:03.893 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:43:03.893 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:43:03.893 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:43:03.894 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:03.894 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:43:03.894 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:03.894 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:43:03.894 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:43:04.152 256+0 records in 00:43:04.152 256+0 records out 00:43:04.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00951811 s, 110 MB/s 00:43:04.152 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:04.152 09:54:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:43:04.152 256+0 records in 00:43:04.152 256+0 records out 00:43:04.152 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.17084 s, 6.1 MB/s 00:43:04.152 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:04.152 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:43:04.410 256+0 records in 00:43:04.410 256+0 records out 00:43:04.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157784 s, 6.6 MB/s 00:43:04.410 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:04.410 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:43:04.410 256+0 records in 00:43:04.410 256+0 records out 00:43:04.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145043 s, 7.2 MB/s 00:43:04.410 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:04.410 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:43:04.668 256+0 records in 00:43:04.668 256+0 records out 00:43:04.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147721 s, 7.1 MB/s 00:43:04.668 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:04.668 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:43:04.668 256+0 records in 00:43:04.668 256+0 records out 00:43:04.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123313 s, 8.5 MB/s 00:43:04.668 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:04.668 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:43:04.926 256+0 records in 00:43:04.926 256+0 records out 00:43:04.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123116 s, 8.5 MB/s 00:43:04.926 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:43:04.926 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:43:04.926 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:04.926 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:43:04.926 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:04.926 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:43:04.926 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:43:04.926 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:04.926 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:43:04.926 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:04.926 09:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
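The nbd_dd_data_verify phase traced here writes one shared 1 MiB random file to every export with direct I/O, then byte-compares each export back against that same file in the cmp pass that continues below. A condensed sketch of the two passes; the /tmp scratch path stands in for the repo's test/bdev/nbdrandtest:

    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp_file=/tmp/nbdrandtest
    # Write pass: 256 x 4 KiB of random data, pushed to each export directly.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # Verify pass: compare the first 1 MiB of each device with the source file.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"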
00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:05.492 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:06.059 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:06.059 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:06.059 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:06.059 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:06.059 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:06.059 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:06.059 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:06.059 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:06.059 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:06.059 09:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:43:06.317 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:06.317 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:06.317 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:06.317 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:06.317 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:06.317 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:06.317 09:54:13 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:43:06.317 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:06.317 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:06.317 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:43:06.575 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:43:06.575 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:43:06.575 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:43:06.575 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:06.575 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:06.575 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:43:06.575 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:06.575 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:06.575 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:06.575 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:43:07.142 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:43:07.142 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:43:07.142 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:43:07.142 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:07.142 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:07.142 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:43:07.142 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:07.142 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:07.142 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:07.142 09:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:43:07.400 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:43:07.400 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:43:07.400 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:43:07.400 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:07.400 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:07.400 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:43:07.400 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:07.400 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:07.400 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:07.400 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:43:07.658 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:43:07.658 09:54:14 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:43:07.658 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:43:07.658 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:07.659 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:07.659 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:43:07.659 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:07.659 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:07.659 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:07.659 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:07.659 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:43:07.917 09:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:43:08.175 malloc_lvol_verify 00:43:08.175 09:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:43:08.745 d58c59c7-6b24-4016-95c2-4255896df15b 00:43:08.745 09:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:43:09.003 a1c6e4e9-b5c7-4f08-9198-100b04df7252 00:43:09.003 09:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:43:09.261 /dev/nbd0 00:43:09.261 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:43:09.261 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:43:09.261 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- 
# [[ -e /sys/block/nbd0/size ]] 00:43:09.261 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:43:09.261 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:43:09.261 mke2fs 1.47.0 (5-Feb-2023) 00:43:09.261 Discarding device blocks: 0/4096 done 00:43:09.261 Creating filesystem with 4096 1k blocks and 1024 inodes 00:43:09.261 00:43:09.261 Allocating group tables: 0/1 done 00:43:09.261 Writing inode tables: 0/1 done 00:43:09.261 Creating journal (1024 blocks): done 00:43:09.261 Writing superblocks and filesystem accounting information: 0/1 done 00:43:09.261 00:43:09.261 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:09.261 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:09.261 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:09.261 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:09.261 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:09.261 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:09.261 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61535 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61535 ']' 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61535 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61535 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:09.519 killing process with pid 61535 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61535' 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61535 00:43:09.519 09:54:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61535 00:43:12.800 09:54:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:43:12.800 00:43:12.800 real 0m17.014s 00:43:12.800 user 0m22.841s 00:43:12.800 sys 0m5.176s 00:43:12.800 09:54:19 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:43:12.800 ************************************ 00:43:12.800 END TEST bdev_nbd 00:43:12.800 ************************************ 00:43:12.800 09:54:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:12.800 09:54:19 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:43:12.800 09:54:19 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:43:12.801 skipping fio tests on NVMe due to multi-ns failures. 00:43:12.801 09:54:19 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:43:12.801 09:54:19 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:12.801 09:54:19 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:12.801 09:54:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:43:12.801 09:54:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:12.801 09:54:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:12.801 ************************************ 00:43:12.801 START TEST bdev_verify 00:43:12.801 ************************************ 00:43:12.801 09:54:19 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:12.801 [2024-12-09 09:54:19.713409] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:43:12.801 [2024-12-09 09:54:19.713919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61986 ] 00:43:13.059 [2024-12-09 09:54:19.893019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:13.059 [2024-12-09 09:54:20.026406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:13.059 [2024-12-09 09:54:20.026407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:13.991 Running I/O for 5 seconds... 
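The verify stage hands everything to bdevperf, and its flags map directly onto the job lines printed below (depth: 128, IO size: 4096, workload: verify, with jobs split across Core Mask 0x1 and 0x2). A sketch of the invocation with the flags glossed; -C is copied from the trace without a gloss:

    # -q 128     queue depth per job           ("depth: 128" in the job lines)
    # -o 4096    I/O size in bytes             ("IO size: 4096")
    # -w verify  write-then-read-back workload ("workload: verify")
    # -t 5       run time in seconds           ("Running I/O for 5 seconds...")
    # -m 0x3     two-core mask                 (Core Mask 0x1 and 0x2 below)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''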
00:43:15.931 19712.00 IOPS, 77.00 MiB/s [2024-12-09T09:54:24.348Z] 18528.00 IOPS, 72.38 MiB/s [2024-12-09T09:54:25.281Z] 18240.00 IOPS, 71.25 MiB/s [2024-12-09T09:54:25.946Z] 17920.00 IOPS, 70.00 MiB/s [2024-12-09T09:54:25.946Z] 17958.40 IOPS, 70.15 MiB/s 00:43:18.902 Latency(us) 00:43:18.902 [2024-12-09T09:54:25.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:18.902 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:18.902 Verification LBA range: start 0x0 length 0xbd0bd 00:43:18.902 Nvme0n1 : 5.07 1527.07 5.97 0.00 0.00 83394.05 10545.34 90558.84 00:43:18.902 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:18.902 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:43:18.902 Nvme0n1 : 5.05 1419.98 5.55 0.00 0.00 89722.81 14894.55 101044.60 00:43:18.902 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:18.902 Verification LBA range: start 0x0 length 0xa0000 00:43:18.902 Nvme1n1 : 5.08 1525.45 5.96 0.00 0.00 83312.94 14715.81 85792.58 00:43:18.902 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:18.902 Verification LBA range: start 0xa0000 length 0xa0000 00:43:18.902 Nvme1n1 : 5.08 1423.33 5.56 0.00 0.00 89272.02 9651.67 94371.84 00:43:18.902 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:18.902 Verification LBA range: start 0x0 length 0x80000 00:43:18.902 Nvme2n1 : 5.09 1532.77 5.99 0.00 0.00 82976.02 13702.98 76260.07 00:43:18.902 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:18.902 Verification LBA range: start 0x80000 length 0x80000 00:43:18.902 Nvme2n1 : 5.08 1422.67 5.56 0.00 0.00 89132.88 10426.18 94371.84 00:43:18.902 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:18.902 Verification LBA range: start 0x0 length 0x80000 00:43:18.902 Nvme2n2 : 5.10 1531.54 5.98 0.00 0.00 82857.09 15728.64 81979.58 00:43:18.902 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:18.902 Verification LBA range: start 0x80000 length 0x80000 00:43:18.902 Nvme2n2 : 5.10 1430.28 5.59 0.00 0.00 88709.50 14120.03 95801.72 00:43:18.902 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:18.902 Verification LBA range: start 0x0 length 0x80000 00:43:18.902 Nvme2n3 : 5.10 1530.96 5.98 0.00 0.00 82727.57 15847.80 87222.46 00:43:18.902 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:18.902 Verification LBA range: start 0x80000 length 0x80000 00:43:18.902 Nvme2n3 : 5.10 1429.54 5.58 0.00 0.00 88568.30 15013.70 100567.97 00:43:18.902 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:18.902 Verification LBA range: start 0x0 length 0x20000 00:43:18.902 Nvme3n1 : 5.10 1529.99 5.98 0.00 0.00 82598.41 11736.90 91035.46 00:43:18.902 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:18.902 Verification LBA range: start 0x20000 length 0x20000 00:43:18.902 Nvme3n1 : 5.11 1428.56 5.58 0.00 0.00 88431.64 11439.01 101521.22 00:43:18.902 [2024-12-09T09:54:25.946Z] =================================================================================================================== 00:43:18.902 [2024-12-09T09:54:25.946Z] Total : 17732.13 69.27 0.00 0.00 85867.05 9651.67 101521.22 00:43:20.276 ************************************ 00:43:20.276 END TEST bdev_verify 00:43:20.276 ************************************ 00:43:20.276 
00:43:20.276 real 0m7.678s 00:43:20.276 user 0m14.147s 00:43:20.276 sys 0m0.308s 00:43:20.276 09:54:27 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:20.276 09:54:27 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:43:20.535 09:54:27 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:20.535 09:54:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:43:20.535 09:54:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:20.535 09:54:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:20.535 ************************************ 00:43:20.535 START TEST bdev_verify_big_io 00:43:20.535 ************************************ 00:43:20.535 09:54:27 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:20.535 [2024-12-09 09:54:27.449714] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:43:20.535 [2024-12-09 09:54:27.449891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62084 ] 00:43:20.793 [2024-12-09 09:54:27.639406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:20.793 [2024-12-09 09:54:27.799115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:20.793 [2024-12-09 09:54:27.799123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:21.728 Running I/O for 5 seconds... 
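bdev_verify_big_io, which starts above, is the same verify workload with the I/O size raised from 4 KiB to 64 KiB; per the run_test line, only the -o argument changes (sketch, same path assumptions as before):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3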
00:43:26.918 678.00 IOPS, 42.38 MiB/s [2024-12-09T09:54:34.897Z] 2572.00 IOPS, 160.75 MiB/s [2024-12-09T09:54:34.897Z] 3119.33 IOPS, 194.96 MiB/s 00:43:27.853 Latency(us) 00:43:27.853 [2024-12-09T09:54:34.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:27.853 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:27.853 Verification LBA range: start 0x0 length 0xbd0b 00:43:27.853 Nvme0n1 : 5.76 129.99 8.12 0.00 0.00 949268.32 21209.83 999006.95 00:43:27.853 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:27.853 Verification LBA range: start 0xbd0b length 0xbd0b 00:43:27.853 Nvme0n1 : 5.62 125.51 7.84 0.00 0.00 974716.21 10962.39 1029510.98 00:43:27.853 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:27.853 Verification LBA range: start 0x0 length 0xa000 00:43:27.853 Nvme1n1 : 5.76 129.78 8.11 0.00 0.00 923570.78 49092.42 835047.80 00:43:27.853 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:27.853 Verification LBA range: start 0xa000 length 0xa000 00:43:27.853 Nvme1n1 : 5.75 120.56 7.53 0.00 0.00 971127.11 48854.11 1509949.44 00:43:27.853 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:27.853 Verification LBA range: start 0x0 length 0x8000 00:43:27.853 Nvme2n1 : 5.81 132.77 8.30 0.00 0.00 880535.85 107240.73 960876.92 00:43:27.853 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:27.853 Verification LBA range: start 0x8000 length 0x8000 00:43:27.853 Nvme2n1 : 5.80 128.85 8.05 0.00 0.00 898604.56 45517.73 1540453.47 00:43:27.853 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:27.853 Verification LBA range: start 0x0 length 0x8000 00:43:27.853 Nvme2n2 : 5.77 133.15 8.32 0.00 0.00 856676.23 105334.23 937998.89 00:43:27.853 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:27.853 Verification LBA range: start 0x8000 length 0x8000 00:43:27.853 Nvme2n2 : 5.87 136.01 8.50 0.00 0.00 827958.14 23235.49 1555705.48 00:43:27.853 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:27.853 Verification LBA range: start 0x0 length 0x8000 00:43:27.853 Nvme2n3 : 5.83 142.74 8.92 0.00 0.00 779693.65 17992.61 884616.84 00:43:27.853 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:27.853 Verification LBA range: start 0x8000 length 0x8000 00:43:27.853 Nvme2n3 : 5.89 138.72 8.67 0.00 0.00 785871.51 43611.23 1601461.53 00:43:27.853 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:27.853 Verification LBA range: start 0x0 length 0x2000 00:43:27.853 Nvme3n1 : 5.89 155.84 9.74 0.00 0.00 693846.11 4468.36 915120.87 00:43:27.853 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:27.853 Verification LBA range: start 0x2000 length 0x2000 00:43:27.853 Nvme3n1 : 5.92 163.16 10.20 0.00 0.00 654856.12 588.33 1403185.34 00:43:27.853 [2024-12-09T09:54:34.897Z] =================================================================================================================== 00:43:27.853 [2024-12-09T09:54:34.897Z] Total : 1637.07 102.32 0.00 0.00 840106.34 588.33 1601461.53 00:43:29.252 00:43:29.252 real 0m8.750s 00:43:29.252 user 0m16.235s 00:43:29.252 sys 0m0.327s 00:43:29.252 09:54:36 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:29.252 ************************************ 
00:43:29.252 09:54:36 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:43:29.252 END TEST bdev_verify_big_io 00:43:29.252 ************************************ 00:43:29.252 09:54:36 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:29.252 09:54:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:43:29.252 09:54:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:29.252 09:54:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:29.252 ************************************ 00:43:29.252 START TEST bdev_write_zeroes 00:43:29.252 ************************************ 00:43:29.252 09:54:36 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:29.252 [2024-12-09 09:54:36.241975] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:43:29.252 [2024-12-09 09:54:36.242167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62199 ] 00:43:29.511 [2024-12-09 09:54:36.420609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:29.769 [2024-12-09 09:54:36.589097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:30.336 Running I/O for 1 seconds... 00:43:31.710 48000.00 IOPS, 187.50 MiB/s 00:43:31.710 Latency(us) 00:43:31.710 [2024-12-09T09:54:38.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:31.710 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:31.710 Nvme0n1 : 1.03 7980.48 31.17 0.00 0.00 15998.20 11856.06 28120.90 00:43:31.710 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:31.710 Nvme1n1 : 1.03 7969.59 31.13 0.00 0.00 15995.07 11915.64 26214.40 00:43:31.710 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:31.710 Nvme2n1 : 1.03 7959.88 31.09 0.00 0.00 15955.98 11736.90 23950.43 00:43:31.710 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:31.710 Nvme2n2 : 1.03 7950.04 31.05 0.00 0.00 15889.52 12213.53 22878.02 00:43:31.710 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:31.710 Nvme2n3 : 1.03 7939.05 31.01 0.00 0.00 15865.25 10068.71 23473.80 00:43:31.710 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:31.710 Nvme3n1 : 1.03 7929.02 30.97 0.00 0.00 15847.34 8698.41 25261.15 00:43:31.710 [2024-12-09T09:54:38.754Z] =================================================================================================================== 00:43:31.710 [2024-12-09T09:54:38.754Z] Total : 47728.06 186.44 0.00 0.00 15925.23 8698.41 28120.90 00:43:32.685 00:43:32.685 real 0m3.357s 00:43:32.685 user 0m2.945s 00:43:32.685 sys 0m0.286s 00:43:32.685 09:54:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:32.685 09:54:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:43:32.685 
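The bdev_write_zeroes numbers above come from a shorter, single-core variant: per its run_test line, the workload switches to write_zeroes and runs for one second with no explicit core mask (sketch, paths as in this workspace):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1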
************************************ 00:43:32.685 END TEST bdev_write_zeroes 00:43:32.685 ************************************ 00:43:32.685 09:54:39 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:32.685 09:54:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:43:32.685 09:54:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:32.685 09:54:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:32.685 ************************************ 00:43:32.685 START TEST bdev_json_nonenclosed 00:43:32.685 ************************************ 00:43:32.685 09:54:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:32.685 [2024-12-09 09:54:39.676710] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:43:32.685 [2024-12-09 09:54:39.676904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62252 ] 00:43:32.943 [2024-12-09 09:54:39.865895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:33.202 [2024-12-09 09:54:40.005900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:33.202 [2024-12-09 09:54:40.006003] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:43:33.202 [2024-12-09 09:54:40.006033] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:33.202 [2024-12-09 09:54:40.006055] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:33.461 00:43:33.461 real 0m0.736s 00:43:33.461 user 0m0.478s 00:43:33.461 sys 0m0.151s 00:43:33.461 09:54:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:33.461 09:54:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:43:33.461 ************************************ 00:43:33.461 END TEST bdev_json_nonenclosed 00:43:33.461 ************************************ 00:43:33.461 09:54:40 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:33.461 09:54:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:43:33.461 09:54:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:33.461 09:54:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:33.461 ************************************ 00:43:33.461 START TEST bdev_json_nonarray 00:43:33.461 ************************************ 00:43:33.461 09:54:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:33.461 [2024-12-09 09:54:40.456517] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:43:33.461 [2024-12-09 09:54:40.456699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62283 ] 00:43:33.721 [2024-12-09 09:54:40.640216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:33.979 [2024-12-09 09:54:40.775846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:33.979 [2024-12-09 09:54:40.776052] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:43:33.979 [2024-12-09 09:54:40.776082] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:33.979 [2024-12-09 09:54:40.776097] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:34.237 00:43:34.237 real 0m0.711s 00:43:34.237 user 0m0.460s 00:43:34.237 sys 0m0.144s 00:43:34.237 09:54:41 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:34.237 09:54:41 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:43:34.237 ************************************ 00:43:34.237 END TEST bdev_json_nonarray 00:43:34.237 ************************************ 00:43:34.237 09:54:41 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:43:34.237 09:54:41 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:43:34.237 09:54:41 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:43:34.237 09:54:41 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:43:34.237 09:54:41 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:43:34.237 09:54:41 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:43:34.237 09:54:41 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:34.237 09:54:41 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:43:34.237 09:54:41 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:43:34.237 09:54:41 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:43:34.237 09:54:41 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:43:34.237 00:43:34.237 real 0m48.156s 00:43:34.237 user 1m10.817s 00:43:34.237 sys 0m8.134s 00:43:34.237 09:54:41 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:34.237 09:54:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:34.237 ************************************ 00:43:34.237 END TEST blockdev_nvme 00:43:34.237 ************************************ 00:43:34.237 09:54:41 -- spdk/autotest.sh@209 -- # uname -s 00:43:34.237 09:54:41 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:43:34.237 09:54:41 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:43:34.237 09:54:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:34.237 09:54:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:34.237 09:54:41 -- common/autotest_common.sh@10 -- # set +x 00:43:34.237 ************************************ 00:43:34.237 START TEST blockdev_nvme_gpt 00:43:34.237 ************************************ 00:43:34.237 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:43:34.237 * Looking for test storage... 
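Looking back at the two bdev_json tests that closed out blockdev_nvme above: both are negative tests. They feed bdevperf deliberately malformed configs (nonenclosed.json is not wrapped in a top-level object; nonarray.json makes 'subsystems' something other than an array) and pass when json_config rejects them with exactly the errors logged. For contrast, the minimal shape the loader accepts looks like this (illustrative sketch only; not a file used by this run, whose bdev.json is generated later by gen_nvme.sh):

cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF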
00:43:34.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:43:34.237 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:34.237 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:43:34.237 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:34.496 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:34.496 09:54:41 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:43:34.496 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:34.496 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:34.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.496 --rc genhtml_branch_coverage=1 00:43:34.496 --rc genhtml_function_coverage=1 00:43:34.496 --rc genhtml_legend=1 00:43:34.496 --rc geninfo_all_blocks=1 00:43:34.496 --rc geninfo_unexecuted_blocks=1 00:43:34.496 00:43:34.496 ' 00:43:34.496 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:34.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.496 --rc 
genhtml_branch_coverage=1 00:43:34.496 --rc genhtml_function_coverage=1 00:43:34.496 --rc genhtml_legend=1 00:43:34.496 --rc geninfo_all_blocks=1 00:43:34.496 --rc geninfo_unexecuted_blocks=1 00:43:34.496 00:43:34.496 ' 00:43:34.496 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:34.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.496 --rc genhtml_branch_coverage=1 00:43:34.496 --rc genhtml_function_coverage=1 00:43:34.496 --rc genhtml_legend=1 00:43:34.496 --rc geninfo_all_blocks=1 00:43:34.496 --rc geninfo_unexecuted_blocks=1 00:43:34.496 00:43:34.496 ' 00:43:34.496 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:34.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.496 --rc genhtml_branch_coverage=1 00:43:34.496 --rc genhtml_function_coverage=1 00:43:34.496 --rc genhtml_legend=1 00:43:34.496 --rc geninfo_all_blocks=1 00:43:34.496 --rc geninfo_unexecuted_blocks=1 00:43:34.496 00:43:34.496 ' 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62367 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:43:34.496 09:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62367 00:43:34.496 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62367 ']' 00:43:34.496 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:34.496 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:34.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:34.496 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:34.496 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:34.496 09:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:34.496 [2024-12-09 09:54:41.482229] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:43:34.496 [2024-12-09 09:54:41.482991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62367 ] 00:43:34.755 [2024-12-09 09:54:41.673272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:35.013 [2024-12-09 09:54:41.834646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:35.945 09:54:42 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:35.945 09:54:42 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:43:35.945 09:54:42 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:43:35.945 09:54:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:43:35.945 09:54:42 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:43:36.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:43:36.460 Waiting for block devices as requested 00:43:36.460 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:43:36.460 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:43:36.460 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:43:36.718 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:43:41.980 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:43:41.980 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:43:41.980 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:43:41.980 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:43:41.980 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:43:41.980 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:43:41.980 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:43:41.980 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:43:41.980 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:43:41.980 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:43:41.980 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:43:41.980 09:54:48 
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
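The long block above is the harness scanning every NVMe namespace and excluding zoned ones; the whole probe reduces to reading each namespace's sysfs queue/zoned attribute. A condensed sketch of the same logic as traced in the is_block_zoned calls:

for ns in /sys/block/nvme*n*; do
    [[ -e $ns/queue/zoned ]] || continue        # attribute absent: not zoned
    [[ $(<"$ns/queue/zoned") != none ]] && echo "${ns##*/}: zoned, excluded"
done

Here every namespace reads back 'none', so all devices stay in the candidate list built next.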
00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:43:41.981 BYT; 00:43:41.981 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:43:41.981 BYT; 00:43:41.981 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:43:41.981 09:54:48 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:43:41.981 09:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:43:42.917 The operation has completed successfully. 00:43:42.917 09:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:43:43.853 The operation has completed successfully. 00:43:43.853 09:54:50 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:43:44.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:43:44.986 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:43:44.986 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:43:44.986 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:43:45.245 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:43:45.245 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:43:45.245 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.245 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:45.245 [] 00:43:45.245 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.245 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:43:45.245 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:43:45.245 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:43:45.245 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:45.245 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:43:45.245 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.245 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.517 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:43:45.517 09:54:52 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.517 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:43:45.517 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.517 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.517 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.517 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:43:45.517 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.517 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:45.517 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:43:45.817 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.817 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:43:45.817 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:43:45.818 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8b4f0bbd-f163-4b40-a9a4-ce5bd7a48b80"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8b4f0bbd-f163-4b40-a9a4-ce5bd7a48b80",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7b36c036-123a-4357-bd4d-9219b49de54b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7b36c036-123a-4357-bd4d-9219b49de54b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "bbb3308f-7eee-4f05-b24e-4f311d8141c0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bbb3308f-7eee-4f05-b24e-4f311d8141c0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8d207f74-3964-4faf-a592-aba1a72652e2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8d207f74-3964-4faf-a592-aba1a72652e2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1acf458f-7728-4198-a04a-1036c9a4366d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1acf458f-7728-4198-a04a-1036c9a4366d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:43:45.818 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:43:45.818 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:43:45.818 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:43:45.818 09:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62367 00:43:45.818 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62367 ']' 00:43:45.818 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62367 00:43:45.818 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:43:45.818 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:45.818 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62367 00:43:45.818 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:45.818 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:45.818 killing process with pid 62367 00:43:45.818 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62367' 00:43:45.818 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62367 00:43:45.818 09:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62367 00:43:48.347 09:54:54 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:48.347 09:54:54 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:43:48.347 09:54:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:43:48.347 09:54:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:48.347 09:54:54 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:48.347 ************************************ 00:43:48.347 START TEST bdev_hello_world 00:43:48.347 ************************************ 00:43:48.347 09:54:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:43:48.347 [2024-12-09 09:54:55.023136] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:43:48.347 [2024-12-09 09:54:55.023497] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63002 ] 00:43:48.347 [2024-12-09 09:54:55.196476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:48.347 [2024-12-09 09:54:55.325072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:49.281 [2024-12-09 09:54:55.995492] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:43:49.281 [2024-12-09 09:54:55.995557] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:43:49.281 [2024-12-09 09:54:55.995600] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:43:49.281 [2024-12-09 09:54:55.999189] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:43:49.281 [2024-12-09 09:54:55.999810] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:43:49.281 [2024-12-09 09:54:55.999856] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:43:49.281 [2024-12-09 09:54:56.000034] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
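hello_bdev, as traced in the NOTICE lines above, opens the bdev named by -b, writes a test buffer, reads it back, and prints the recovered string before stopping the app. The standalone invocation, per the run_test line (paths as in this workspace):

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1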
00:43:49.281 00:43:49.281 [2024-12-09 09:54:56.000069] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:43:50.217 00:43:50.217 real 0m2.193s 00:43:50.217 user 0m1.796s 00:43:50.217 sys 0m0.285s 00:43:50.217 ************************************ 00:43:50.217 END TEST bdev_hello_world 00:43:50.217 ************************************ 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:43:50.217 09:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:43:50.217 09:54:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:50.217 09:54:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:50.217 09:54:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:50.217 ************************************ 00:43:50.217 START TEST bdev_bounds 00:43:50.217 ************************************ 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63045 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:43:50.217 Process bdevio pid: 63045 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63045' 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63045 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63045 ']' 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:50.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:50.217 09:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:50.476 [2024-12-09 09:54:57.281434] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
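bdev_bounds runs in two steps, both visible above: the bdevio app is started against the same JSON config and held resident, then tests.py asks it to run the CUnit suites over RPC. A sketch of the pattern (the -w and -s 0 semantics are assumed from the harness usage; in the harness the app is process-managed rather than backgrounded by hand):

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests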
00:43:50.476 [2024-12-09 09:54:57.281633] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63045 ] 00:43:50.476 [2024-12-09 09:54:57.471022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:50.734 [2024-12-09 09:54:57.638718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:50.734 [2024-12-09 09:54:57.638900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:50.734 [2024-12-09 09:54:57.638964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:51.300 09:54:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:51.300 09:54:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:43:51.300 09:54:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:43:51.558 I/O targets: 00:43:51.558 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:43:51.558 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:43:51.558 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:43:51.558 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:43:51.558 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:43:51.558 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:43:51.558 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:43:51.558 00:43:51.558 00:43:51.558 CUnit - A unit testing framework for C - Version 2.1-3 00:43:51.558 http://cunit.sourceforge.net/ 00:43:51.558 00:43:51.558 00:43:51.558 Suite: bdevio tests on: Nvme3n1 00:43:51.558 Test: blockdev write read block ...passed 00:43:51.558 Test: blockdev write zeroes read block ...passed 00:43:51.558 Test: blockdev write zeroes read no split ...passed 00:43:51.558 Test: blockdev write zeroes read split ...passed 00:43:51.558 Test: blockdev write zeroes read split partial ...passed 00:43:51.558 Test: blockdev reset ...[2024-12-09 09:54:58.565105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:43:51.558 [2024-12-09 09:54:58.570917] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. passed 00:43:51.558 Test: blockdev write read 8 blocks ...
00:43:51.558 passed 00:43:51.558 Test: blockdev write read size > 128k ...passed 00:43:51.558 Test: blockdev write read invalid size ...passed 00:43:51.558 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:51.558 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:51.558 Test: blockdev write read max offset ...passed 00:43:51.558 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:51.558 Test: blockdev writev readv 8 blocks ...passed 00:43:51.558 Test: blockdev writev readv 30 x 1block ...passed 00:43:51.558 Test: blockdev writev readv block ...passed 00:43:51.558 Test: blockdev writev readv size > 128k ...passed 00:43:51.558 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:51.558 Test: blockdev comparev and writev ...[2024-12-09 09:54:58.581895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0004000 len:0x1000 00:43:51.558 [2024-12-09 09:54:58.581986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:43:51.558 passed 00:43:51.558 Test: blockdev nvme passthru rw ...passed 00:43:51.558 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:54:58.583014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:43:51.558 [2024-12-09 09:54:58.583280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:43:51.558 passed 00:43:51.558 Test: blockdev nvme admin passthru ...passed 00:43:51.558 Test: blockdev copy ...passed 00:43:51.558 Suite: bdevio tests on: Nvme2n3 00:43:51.558 Test: blockdev write read block ...passed 00:43:51.558 Test: blockdev write zeroes read block ...passed 00:43:51.816 Test: blockdev write zeroes read no split ...passed 00:43:51.816 Test: blockdev write zeroes read split ...passed 00:43:51.816 Test: blockdev write zeroes read split partial ...passed 00:43:51.816 Test: blockdev reset ...[2024-12-09 09:54:58.659479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:43:51.816 [2024-12-09 09:54:58.664974] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:43:51.816 passed 00:43:51.816 Test: blockdev write read 8 blocks ...passed 00:43:51.816 Test: blockdev write read size > 128k ...passed 00:43:51.816 Test: blockdev write read invalid size ...passed 00:43:51.816 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:51.816 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:51.816 Test: blockdev write read max offset ...passed 00:43:51.816 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:51.816 Test: blockdev writev readv 8 blocks ...passed 00:43:51.816 Test: blockdev writev readv 30 x 1block ...passed 00:43:51.816 Test: blockdev writev readv block ...passed 00:43:51.816 Test: blockdev writev readv size > 128k ...passed 00:43:51.816 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:51.816 Test: blockdev comparev and writev ...[2024-12-09 09:54:58.674555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0002000 len:0x1000 00:43:51.816 [2024-12-09 09:54:58.674632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:43:51.816 passed 00:43:51.816 Test: blockdev nvme passthru rw ...passed 00:43:51.816 Test: blockdev nvme passthru vendor specific ...passed 00:43:51.816 Test: blockdev nvme admin passthru ...[2024-12-09 09:54:58.675475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:43:51.816 [2024-12-09 09:54:58.675557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:43:51.816 passed 00:43:51.816 Test: blockdev copy ...passed 00:43:51.817 Suite: bdevio tests on: Nvme2n2 00:43:51.817 Test: blockdev write read block ...passed 00:43:51.817 Test: blockdev write zeroes read block ...passed 00:43:51.817 Test: blockdev write zeroes read no split ...passed 00:43:51.817 Test: blockdev write zeroes read split ...passed 00:43:51.817 Test: blockdev write zeroes read split partial ...passed 00:43:51.817 Test: blockdev reset ...[2024-12-09 09:54:58.754152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:43:51.817 [2024-12-09 09:54:58.759923] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:43:51.817 passed 00:43:51.817 Test: blockdev write read 8 blocks ...passed 00:43:51.817 Test: blockdev write read size > 128k ...passed 00:43:51.817 Test: blockdev write read invalid size ...passed 00:43:51.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:51.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:51.817 Test: blockdev write read max offset ...passed 00:43:51.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:51.817 Test: blockdev writev readv 8 blocks ...passed 00:43:51.817 Test: blockdev writev readv 30 x 1block ...passed 00:43:51.817 Test: blockdev writev readv block ...passed 00:43:51.817 Test: blockdev writev readv size > 128k ...passed 00:43:51.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:51.817 Test: blockdev comparev and writev ...[2024-12-09 09:54:58.770274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4638000 len:0x1000 00:43:51.817 [2024-12-09 09:54:58.770351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:43:51.817 passed 00:43:51.817 Test: blockdev nvme passthru rw ...passed 00:43:51.817 Test: blockdev nvme passthru vendor specific ...passed 00:43:51.817 Test: blockdev nvme admin passthru ...[2024-12-09 09:54:58.771312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:43:51.817 [2024-12-09 09:54:58.771373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:43:51.817 passed 00:43:51.817 Test: blockdev copy ...passed 00:43:51.817 Suite: bdevio tests on: Nvme2n1 00:43:51.817 Test: blockdev write read block ...passed 00:43:51.817 Test: blockdev write zeroes read block ...passed 00:43:51.817 Test: blockdev write zeroes read no split ...passed 00:43:51.817 Test: blockdev write zeroes read split ...passed 00:43:51.817 Test: blockdev write zeroes read split partial ...passed 00:43:51.817 Test: blockdev reset ...[2024-12-09 09:54:58.836183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:43:51.817 [2024-12-09 09:54:58.841362] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:43:51.817 passed 00:43:51.817 Test: blockdev write read 8 blocks ...
00:43:51.817 passed 00:43:51.817 Test: blockdev write read size > 128k ...passed 00:43:51.817 Test: blockdev write read invalid size ...passed 00:43:51.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:51.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:51.817 Test: blockdev write read max offset ...passed 00:43:51.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:51.817 Test: blockdev writev readv 8 blocks ...passed 00:43:51.817 Test: blockdev writev readv 30 x 1block ...passed 00:43:51.817 Test: blockdev writev readv block ...passed 00:43:51.817 Test: blockdev writev readv size > 128k ...passed 00:43:51.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:51.817 Test: blockdev comparev and writev ...[2024-12-09 09:54:58.849619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4634000 len:0x1000 00:43:51.817 [2024-12-09 09:54:58.849886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:43:51.817 passed 00:43:51.817 Test: blockdev nvme passthru rw ...passed 00:43:51.817 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:54:58.850682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:43:51.817 [2024-12-09 09:54:58.850749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:43:51.817 passed 00:43:51.817 Test: blockdev nvme admin passthru ...passed 00:43:51.817 Test: blockdev copy ...passed 00:43:51.817 Suite: bdevio tests on: Nvme1n1p2 00:43:51.817 Test: blockdev write read block ...passed 00:43:52.075 Test: blockdev write zeroes read block ...passed 00:43:52.075 Test: blockdev write zeroes read no split ...passed 00:43:52.075 Test: blockdev write zeroes read split ...passed 00:43:52.075 Test: blockdev write zeroes read split partial ...passed 00:43:52.075 Test: blockdev reset ...[2024-12-09 09:54:58.922102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:43:52.075 [2024-12-09 09:54:58.926722] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:43:52.075 passed 00:43:52.075 Test: blockdev write read 8 blocks ...passed 00:43:52.075 Test: blockdev write read size > 128k ...passed 00:43:52.075 Test: blockdev write read invalid size ...passed 00:43:52.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:52.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:52.075 Test: blockdev write read max offset ...passed 00:43:52.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:52.075 Test: blockdev writev readv 8 blocks ...passed 00:43:52.075 Test: blockdev writev readv 30 x 1block ...passed 00:43:52.075 Test: blockdev writev readv block ...passed 00:43:52.075 Test: blockdev writev readv size > 128k ...passed 00:43:52.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:52.075 Test: blockdev comparev and writev ...[2024-12-09 09:54:58.935381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d4630000 len:0x1000 00:43:52.075 [2024-12-09 09:54:58.935460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:43:52.075 passed 00:43:52.075 Test: blockdev nvme passthru rw ...passed 00:43:52.075 Test: blockdev nvme passthru vendor specific ...passed 00:43:52.075 Test: blockdev nvme admin passthru ...passed 00:43:52.075 Test: blockdev copy ...passed 00:43:52.075 Suite: bdevio tests on: Nvme1n1p1 00:43:52.075 Test: blockdev write read block ...passed 00:43:52.075 Test: blockdev write zeroes read block ...passed 00:43:52.075 Test: blockdev write zeroes read no split ...passed 00:43:52.075 Test: blockdev write zeroes read split ...passed 00:43:52.075 Test: blockdev write zeroes read split partial ...passed 00:43:52.075 Test: blockdev reset ...[2024-12-09 09:54:58.994939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:43:52.075 [2024-12-09 09:54:59.000280] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:43:52.075 passed 00:43:52.075 Test: blockdev write read 8 blocks ...
00:43:52.075 passed 00:43:52.075 Test: blockdev write read size > 128k ...passed 00:43:52.075 Test: blockdev write read invalid size ...passed 00:43:52.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:52.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:52.075 Test: blockdev write read max offset ...passed 00:43:52.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:52.075 Test: blockdev writev readv 8 blocks ...passed 00:43:52.075 Test: blockdev writev readv 30 x 1block ...passed 00:43:52.075 Test: blockdev writev readv block ...passed 00:43:52.075 Test: blockdev writev readv size > 128k ...passed 00:43:52.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:52.075 Test: blockdev comparev and writev ...[2024-12-09 09:54:59.009999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c0a0e000 len:0x1000 00:43:52.075 [2024-12-09 09:54:59.010083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:43:52.075 passed 00:43:52.075 Test: blockdev nvme passthru rw ...passed 00:43:52.075 Test: blockdev nvme passthru vendor specific ...passed 00:43:52.075 Test: blockdev nvme admin passthru ...passed 00:43:52.075 Test: blockdev copy ...passed 00:43:52.075 Suite: bdevio tests on: Nvme0n1 00:43:52.075 Test: blockdev write read block ...passed 00:43:52.075 Test: blockdev write zeroes read block ...passed 00:43:52.075 Test: blockdev write zeroes read no split ...passed 00:43:52.075 Test: blockdev write zeroes read split ...passed 00:43:52.075 Test: blockdev write zeroes read split partial ...passed 00:43:52.075 Test: blockdev reset ...[2024-12-09 09:54:59.084435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:43:52.075 [2024-12-09 09:54:59.089360] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:43:52.075 passed 00:43:52.075 Test: blockdev write read 8 blocks ...passed 00:43:52.075 Test: blockdev write read size > 128k ...passed 00:43:52.075 Test: blockdev write read invalid size ...passed 00:43:52.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:52.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:52.075 Test: blockdev write read max offset ...passed 00:43:52.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:52.075 Test: blockdev writev readv 8 blocks ...passed 00:43:52.075 Test: blockdev writev readv 30 x 1block ...passed 00:43:52.075 Test: blockdev writev readv block ...passed 00:43:52.075 Test: blockdev writev readv size > 128k ...passed 00:43:52.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:52.075 Test: blockdev comparev and writev ...[2024-12-09 09:54:59.097635] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has separate metadata which is not supported yet. 00:43:52.075 passed 00:43:52.075 Test: blockdev nvme passthru rw ...
00:43:52.075 passed 00:43:52.075 Test: blockdev nvme passthru vendor specific ...passed 00:43:52.075 Test: blockdev nvme admin passthru ...[2024-12-09 09:54:59.098190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:43:52.075 [2024-12-09 09:54:59.098288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:43:52.075 passed 00:43:52.075 Test: blockdev copy ...passed 00:43:52.075 00:43:52.075 Run Summary: Type Total Ran Passed Failed Inactive 00:43:52.075 suites 7 7 n/a 0 0 00:43:52.075 tests 161 161 161 0 0 00:43:52.075 asserts 1025 1025 1025 0 n/a 00:43:52.075 00:43:52.075 Elapsed time = 1.677 seconds 00:43:52.075 0 00:43:52.334 09:54:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63045 00:43:52.334 09:54:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63045 ']' 00:43:52.334 09:54:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63045 00:43:52.334 09:54:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:43:52.334 09:54:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:52.334 09:54:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63045 00:43:52.334 killing process with pid 63045 00:43:52.334 09:54:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:52.334 09:54:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:52.334 09:54:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63045' 00:43:52.334 09:54:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63045 00:43:52.334 09:54:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63045 00:43:53.269 ************************************ 00:43:53.269 END TEST bdev_bounds 00:43:53.269 ************************************ 00:43:53.269 09:55:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:43:53.269 00:43:53.269 real 0m3.091s 00:43:53.269 user 0m7.946s 00:43:53.269 sys 0m0.472s 00:43:53.269 09:55:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:53.269 09:55:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:53.269 09:55:00 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:43:53.269 09:55:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:43:53.269 09:55:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:53.269 09:55:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:53.528 ************************************ 00:43:53.528 START TEST bdev_nbd 00:43:53.528 ************************************ 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63110 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63110 /var/tmp/spdk-nbd.sock 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63110 ']' 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:53.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:53.528 09:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:53.528 [2024-12-09 09:55:00.430405] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
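The nbd_function_test driven below reduces to three RPCs against the dedicated /var/tmp/spdk-nbd.sock socket, all of which appear verbatim in the trace; condensed for reference, with paths exactly as in this run:

  # attach a bdev to a kernel NBD node (one call per device in the trace)
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  # dump the active bdev-to-nbd mappings as JSON; the trace filters it with jq
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
  # detach the device again
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0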
00:43:53.528 [2024-12-09 09:55:00.430813] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:53.786 [2024-12-09 09:55:00.616033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:53.786 [2024-12-09 09:55:00.756387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:43:54.722 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:54.980 1+0 records in 00:43:54.980 1+0 records out 00:43:54.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474939 s, 8.6 MB/s 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:54.980 09:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:54.981 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:54.981 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:43:54.981 09:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:55.239 1+0 records in 00:43:55.239 1+0 records out 00:43:55.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00074549 s, 5.5 MB/s 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:43:55.239 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:55.498 1+0 records in 00:43:55.498 1+0 records out 00:43:55.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790204 s, 5.2 MB/s 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:43:55.498 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:55.832 1+0 records in 00:43:55.832 1+0 records out 00:43:55.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102402 s, 4.0 MB/s 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:43:55.832 09:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:56.090 1+0 records in 00:43:56.090 1+0 records out 00:43:56.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000896897 s, 4.6 MB/s 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:43:56.090 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:56.657 1+0 records in 00:43:56.657 1+0 records out 00:43:56.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000946991 s, 4.3 MB/s 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:43:56.657 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:56.916 1+0 records in 00:43:56.916 1+0 records out 00:43:56.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000994188 s, 4.1 MB/s 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:43:56.916 09:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:57.175 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd0", 00:43:57.175 "bdev_name": "Nvme0n1" 00:43:57.175 }, 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd1", 00:43:57.175 "bdev_name": "Nvme1n1p1" 00:43:57.175 }, 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd2", 00:43:57.175 "bdev_name": "Nvme1n1p2" 00:43:57.175 }, 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd3", 00:43:57.175 "bdev_name": "Nvme2n1" 00:43:57.175 }, 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd4", 00:43:57.175 "bdev_name": "Nvme2n2" 00:43:57.175 }, 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd5", 00:43:57.175 "bdev_name": "Nvme2n3" 00:43:57.175 }, 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd6", 00:43:57.175 "bdev_name": "Nvme3n1" 00:43:57.175 } 00:43:57.175 ]' 00:43:57.175 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:43:57.175 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd0", 00:43:57.175 "bdev_name": "Nvme0n1" 00:43:57.175 }, 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd1", 00:43:57.175 "bdev_name": "Nvme1n1p1" 00:43:57.175 }, 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd2", 00:43:57.175 "bdev_name": "Nvme1n1p2" 00:43:57.175 }, 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd3", 00:43:57.175 "bdev_name": "Nvme2n1" 00:43:57.175 }, 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd4", 00:43:57.175 "bdev_name": "Nvme2n2" 00:43:57.175 }, 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd5", 00:43:57.175 "bdev_name": "Nvme2n3" 00:43:57.175 }, 00:43:57.175 { 00:43:57.175 "nbd_device": "/dev/nbd6", 00:43:57.175 "bdev_name": "Nvme3n1" 00:43:57.175 } 00:43:57.175 ]' 00:43:57.175 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:43:57.175 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:43:57.175 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:57.175 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:43:57.175 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:57.175 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:57.175 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:57.175 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:57.433 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:57.433 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:57.433 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:57.433 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:57.433 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:57.433 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:57.433 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:57.433 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:57.433 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:57.433 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:43:57.691 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:57.691 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:57.691 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:57.691 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:57.691 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:57.691 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:57.948 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:57.948 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:57.948 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:57.949 09:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:43:58.208 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:43:58.208 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:43:58.208 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:43:58.208 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:58.208 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:58.208 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:43:58.208 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:58.208 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:58.208 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:58.208 09:55:05 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:43:58.467 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:43:58.467 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:43:58.467 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:43:58.467 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:58.467 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:58.467 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:43:58.467 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:58.467 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:58.467 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:58.467 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:43:58.726 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:43:58.726 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:43:58.726 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:43:58.726 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:58.726 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:58.726 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:43:58.726 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:58.726 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:58.726 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:58.726 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:43:58.984 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:43:58.984 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:43:58.984 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:43:58.984 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:58.984 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:58.984 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:43:58.984 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:58.984 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:58.984 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:58.984 09:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:43:59.243 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:43:59.243 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:43:59.243 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:43:59.243 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:59.243 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:59.243 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:43:59.243 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:59.243 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:59.243 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:59.243 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:59.243 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:59.810 09:55:06 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:43:59.810 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:44:00.069 /dev/nbd0 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:00.069 1+0 records in 00:44:00.069 1+0 records out 00:44:00.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511211 s, 8.0 MB/s 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:44:00.069 09:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:44:00.346 /dev/nbd1 00:44:00.346 09:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:44:00.346 09:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:44:00.346 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:44:00.346 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:44:00.346 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:44:00.346 09:55:07 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:44:00.346 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:44:00.346 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:44:00.346 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:44:00.346 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:44:00.346 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:00.346 1+0 records in 00:44:00.346 1+0 records out 00:44:00.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591408 s, 6.9 MB/s 00:44:00.347 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:00.347 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:44:00.347 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:00.347 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:44:00.347 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:44:00.347 09:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:00.347 09:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:44:00.347 09:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:44:00.605 /dev/nbd10 00:44:00.605 09:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:44:00.605 09:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:44:00.605 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:44:00.605 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:44:00.605 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:44:00.605 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:44:00.863 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:44:00.863 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:44:00.863 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:44:00.863 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:44:00.863 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:00.863 1+0 records in 00:44:00.863 1+0 records out 00:44:00.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690756 s, 5.9 MB/s 00:44:00.863 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:00.863 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:44:00.863 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:00.863 09:55:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:44:00.863 09:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:44:00.864 09:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:00.864 09:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:44:00.864 09:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:44:01.122 /dev/nbd11 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:01.122 1+0 records in 00:44:01.122 1+0 records out 00:44:01.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000829772 s, 4.9 MB/s 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:44:01.122 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:44:01.380 /dev/nbd12 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
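The grep/dd stanzas repeating through this block are the harness's waitfornbd probe, run once per attached device: poll /proc/partitions until the kernel lists the nbd name (at most 20 tries), then read a single 4 KiB block with O_DIRECT to prove the device answers I/O. A few stanzas further down, the same devices get a data round trip: one shared 1 MiB random pattern written to every node, then byte-compared back with cmp. Both patterns reduce to the sketches below, with scratch paths shortened and the poll interval assumed (it is not visible at this xtrace level):

    waitfornbd() {
        local nbd_name=$1 i size
        # poll until the kernel exposes the device, bounded at 20 attempts as in the trace
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # interval assumed; not shown at this xtrace level
        done
        # a direct-I/O read of one 4 KiB block proves the device serves data
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]  # the same non-empty check the trace makes before returning 0
    }

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # build the 1 MiB pattern
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                             # verify phase
    done
    rm "$tmp"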
00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:01.380 1+0 records in 00:44:01.380 1+0 records out 00:44:01.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000634605 s, 6.5 MB/s 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:44:01.380 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:44:01.638 /dev/nbd13 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:01.638 1+0 records in 00:44:01.638 1+0 records out 00:44:01.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610818 s, 6.7 MB/s 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:44:01.638 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:44:01.928 /dev/nbd14 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:01.928 1+0 records in 00:44:01.928 1+0 records out 00:44:01.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106348 s, 3.9 MB/s 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:01.928 09:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:02.186 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:44:02.186 { 00:44:02.186 "nbd_device": "/dev/nbd0", 00:44:02.186 "bdev_name": "Nvme0n1" 00:44:02.186 }, 00:44:02.186 { 00:44:02.186 "nbd_device": "/dev/nbd1", 00:44:02.186 "bdev_name": "Nvme1n1p1" 00:44:02.186 }, 00:44:02.186 { 00:44:02.186 "nbd_device": "/dev/nbd10", 00:44:02.186 "bdev_name": "Nvme1n1p2" 00:44:02.186 }, 00:44:02.186 { 00:44:02.186 "nbd_device": "/dev/nbd11", 00:44:02.186 "bdev_name": "Nvme2n1" 00:44:02.186 }, 00:44:02.186 { 00:44:02.186 "nbd_device": "/dev/nbd12", 00:44:02.186 "bdev_name": "Nvme2n2" 00:44:02.186 }, 00:44:02.186 { 00:44:02.186 "nbd_device": "/dev/nbd13", 00:44:02.186 "bdev_name": "Nvme2n3" 
00:44:02.186 }, 00:44:02.186 { 00:44:02.186 "nbd_device": "/dev/nbd14", 00:44:02.186 "bdev_name": "Nvme3n1" 00:44:02.186 } 00:44:02.186 ]' 00:44:02.187 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:44:02.187 { 00:44:02.187 "nbd_device": "/dev/nbd0", 00:44:02.187 "bdev_name": "Nvme0n1" 00:44:02.187 }, 00:44:02.187 { 00:44:02.187 "nbd_device": "/dev/nbd1", 00:44:02.187 "bdev_name": "Nvme1n1p1" 00:44:02.187 }, 00:44:02.187 { 00:44:02.187 "nbd_device": "/dev/nbd10", 00:44:02.187 "bdev_name": "Nvme1n1p2" 00:44:02.187 }, 00:44:02.187 { 00:44:02.187 "nbd_device": "/dev/nbd11", 00:44:02.187 "bdev_name": "Nvme2n1" 00:44:02.187 }, 00:44:02.187 { 00:44:02.187 "nbd_device": "/dev/nbd12", 00:44:02.187 "bdev_name": "Nvme2n2" 00:44:02.187 }, 00:44:02.187 { 00:44:02.187 "nbd_device": "/dev/nbd13", 00:44:02.187 "bdev_name": "Nvme2n3" 00:44:02.187 }, 00:44:02.187 { 00:44:02.187 "nbd_device": "/dev/nbd14", 00:44:02.187 "bdev_name": "Nvme3n1" 00:44:02.187 } 00:44:02.187 ]' 00:44:02.187 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:44:02.446 /dev/nbd1 00:44:02.446 /dev/nbd10 00:44:02.446 /dev/nbd11 00:44:02.446 /dev/nbd12 00:44:02.446 /dev/nbd13 00:44:02.446 /dev/nbd14' 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:44:02.446 /dev/nbd1 00:44:02.446 /dev/nbd10 00:44:02.446 /dev/nbd11 00:44:02.446 /dev/nbd12 00:44:02.446 /dev/nbd13 00:44:02.446 /dev/nbd14' 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:44:02.446 256+0 records in 00:44:02.446 256+0 records out 00:44:02.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110029 s, 95.3 MB/s 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:44:02.446 256+0 records in 00:44:02.446 256+0 records out 00:44:02.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.194215 s, 5.4 MB/s 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:44:02.446 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:44:02.704 256+0 records in 00:44:02.704 256+0 records out 00:44:02.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179005 s, 5.9 MB/s 00:44:02.704 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:44:02.704 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:44:02.963 256+0 records in 00:44:02.963 256+0 records out 00:44:02.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.192616 s, 5.4 MB/s 00:44:02.963 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:44:02.963 09:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:44:03.222 256+0 records in 00:44:03.222 256+0 records out 00:44:03.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188628 s, 5.6 MB/s 00:44:03.222 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:44:03.222 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:44:03.222 256+0 records in 00:44:03.222 256+0 records out 00:44:03.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.194723 s, 5.4 MB/s 00:44:03.222 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:44:03.222 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:44:03.481 256+0 records in 00:44:03.481 256+0 records out 00:44:03.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190279 s, 5.5 MB/s 00:44:03.481 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:44:03.481 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:44:03.740 256+0 records in 00:44:03.740 256+0 records out 00:44:03.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191399 s, 5.5 MB/s 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:44:03.740 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:03.741 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:44:03.741 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:03.741 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:44:03.741 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:03.741 09:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:03.999 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:03.999 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:03.999 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:03.999 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:03.999 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:03.999 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:03.999 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:03.999 09:55:11 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:44:03.999 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:03.999 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:44:04.564 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:44:04.564 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:44:04.564 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:44:04.564 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:04.564 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:04.564 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:44:04.564 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:04.564 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:04.564 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:04.564 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:44:04.821 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:44:04.821 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:44:04.821 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:44:04.821 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:04.821 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:04.821 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:44:04.821 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:04.821 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:04.821 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:04.821 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:44:05.079 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:44:05.079 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:44:05.079 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:44:05.079 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:05.079 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:05.079 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:44:05.079 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:05.079 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:05.079 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:05.079 09:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:44:05.337 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:44:05.337 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:44:05.337 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:44:05.337 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:05.337 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:05.337 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:44:05.337 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:05.337 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:05.337 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:05.337 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:44:05.595 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:44:05.595 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:44:05.595 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:44:05.595 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:05.595 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:05.595 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:44:05.595 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:05.595 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:05.595 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:05.595 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:44:05.854 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:44:05.854 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:44:05.854 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:44:05.854 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:05.854 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:05.854 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:44:05.854 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:05.854 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:05.854 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:44:05.854 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:05.854 09:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:06.112 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:44:06.112 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:44:06.112 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:44:06.370 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:44:06.627 malloc_lvol_verify 00:44:06.627 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:44:06.885 377e64e4-69cf-413d-9fe5-d6f6c3159cd0 00:44:06.885 09:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:44:07.143 068ed86e-5ab3-4caf-855d-7fb9a7adb704 00:44:07.143 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:44:07.401 /dev/nbd0 00:44:07.401 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:44:07.401 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:44:07.401 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:44:07.401 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:44:07.401 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:44:07.401 mke2fs 1.47.0 (5-Feb-2023) 00:44:07.401 Discarding device blocks: 0/4096 done 00:44:07.401 Creating filesystem with 4096 1k blocks and 1024 inodes 00:44:07.401 00:44:07.401 Allocating group tables: 0/1 done 00:44:07.401 Writing inode tables: 0/1 done 00:44:07.401 Creating journal (1024 blocks): done 00:44:07.401 Writing superblocks and filesystem accounting information: 0/1 done 00:44:07.401 00:44:07.401 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:07.401 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:07.401 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:44:07.401 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:07.401 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:44:07.401 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:44:07.401 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63110 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63110 ']' 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63110 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63110 00:44:07.660 killing process with pid 63110 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63110' 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63110 00:44:07.660 09:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63110 00:44:09.107 09:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:44:09.107 00:44:09.107 real 0m15.552s 00:44:09.107 user 0m22.211s 00:44:09.107 sys 0m4.996s 00:44:09.107 ************************************ 00:44:09.107 END TEST bdev_nbd 00:44:09.107 ************************************ 00:44:09.107 09:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:09.107 09:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:44:09.107 skipping fio tests on NVMe due to multi-ns failures. 00:44:09.107 09:55:15 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:44:09.107 09:55:15 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:44:09.107 09:55:15 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:44:09.107 09:55:15 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:44:09.107 09:55:15 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:44:09.107 09:55:15 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:09.107 09:55:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:44:09.107 09:55:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:09.107 09:55:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:09.107 ************************************ 00:44:09.107 START TEST bdev_verify 00:44:09.107 ************************************ 00:44:09.107 09:55:15 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:09.107 [2024-12-09 09:55:16.032230] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:44:09.107 [2024-12-09 09:55:16.032437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63571 ] 00:44:09.366 [2024-12-09 09:55:16.218710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:09.366 [2024-12-09 09:55:16.355288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:09.366 [2024-12-09 09:55:16.355311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:10.303 Running I/O for 5 seconds... 
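The five-second run announced just above is a single bdevperf invocation against the JSON config the suite generated earlier: queue depth 128, 4 KiB I/O, workload verify, core mask 0x3, which is why every bdev shows two job rows in the table below (one per reactor, Core Mask 0x1 and 0x2). Reproduced by hand with the trace's paths:

    spdk=/home/vagrant/spdk_repo/spdk
    # flags exactly as in the run_test line above
    "$spdk/build/examples/bdevperf" \
        --json "$spdk/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''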
00:44:12.651 18176.00 IOPS, 71.00 MiB/s [2024-12-09T09:55:20.631Z] 18464.00 IOPS, 72.12 MiB/s [2024-12-09T09:55:21.610Z] 18240.00 IOPS, 71.25 MiB/s [2024-12-09T09:55:22.547Z] 18112.00 IOPS, 70.75 MiB/s [2024-12-09T09:55:22.547Z] 17740.80 IOPS, 69.30 MiB/s 00:44:15.503 Latency(us) 00:44:15.503 [2024-12-09T09:55:22.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:15.503 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x0 length 0xbd0bd 00:44:15.503 Nvme0n1 : 5.07 1262.08 4.93 0.00 0.00 101096.98 22282.24 136314.88 00:44:15.503 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:44:15.503 Nvme0n1 : 5.10 1229.37 4.80 0.00 0.00 103878.73 22639.71 88175.71 00:44:15.503 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x0 length 0x4ff80 00:44:15.503 Nvme1n1p1 : 5.07 1261.46 4.93 0.00 0.00 100876.54 24307.90 127735.62 00:44:15.503 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x4ff80 length 0x4ff80 00:44:15.503 Nvme1n1p1 : 5.10 1228.84 4.80 0.00 0.00 103727.06 22282.24 83886.08 00:44:15.503 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x0 length 0x4ff7f 00:44:15.503 Nvme1n1p2 : 5.08 1260.85 4.93 0.00 0.00 100699.50 27286.81 121539.49 00:44:15.503 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:44:15.503 Nvme1n1p2 : 5.11 1228.34 4.80 0.00 0.00 103587.70 21924.77 81979.58 00:44:15.503 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x0 length 0x80000 00:44:15.503 Nvme2n1 : 5.08 1260.30 4.92 0.00 0.00 100521.82 27763.43 126782.37 00:44:15.503 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x80000 length 0x80000 00:44:15.503 Nvme2n1 : 5.11 1227.86 4.80 0.00 0.00 103440.09 22282.24 80549.70 00:44:15.503 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x0 length 0x80000 00:44:15.503 Nvme2n2 : 5.08 1259.76 4.92 0.00 0.00 100360.46 27048.49 128688.87 00:44:15.503 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x80000 length 0x80000 00:44:15.503 Nvme2n2 : 5.11 1227.35 4.79 0.00 0.00 103289.74 22758.87 82932.83 00:44:15.503 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x0 length 0x80000 00:44:15.503 Nvme2n3 : 5.10 1268.29 4.95 0.00 0.00 99561.07 5123.72 132501.88 00:44:15.503 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x80000 length 0x80000 00:44:15.503 Nvme2n3 : 5.11 1226.86 4.79 0.00 0.00 103112.31 21924.77 85315.96 00:44:15.503 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x0 length 0x20000 00:44:15.503 Nvme3n1 : 5.10 1267.67 4.95 0.00 0.00 99429.54 5630.14 135361.63 00:44:15.503 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.503 Verification LBA range: start 0x20000 length 0x20000 
00:44:15.503 Nvme3n1 : 5.11 1226.39 4.79 0.00 0.00 102960.36 14358.34 89605.59 00:44:15.503 [2024-12-09T09:55:22.547Z] =================================================================================================================== 00:44:15.503 [2024-12-09T09:55:22.547Z] Total : 17435.42 68.11 0.00 0.00 101876.94 5123.72 136314.88 00:44:16.879 00:44:16.879 real 0m7.799s 00:44:16.879 user 0m14.341s 00:44:16.879 sys 0m0.323s 00:44:16.879 ************************************ 00:44:16.879 END TEST bdev_verify 00:44:16.879 ************************************ 00:44:16.879 09:55:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:16.879 09:55:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:44:16.879 09:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:16.879 09:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:44:16.879 09:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:16.879 09:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:16.879 ************************************ 00:44:16.879 START TEST bdev_verify_big_io 00:44:16.879 ************************************ 00:44:16.879 09:55:23 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:16.879 [2024-12-09 09:55:23.867718] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:44:16.879 [2024-12-09 09:55:23.867878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63673 ] 00:44:17.137 [2024-12-09 09:55:24.038642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:17.137 [2024-12-09 09:55:24.170271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:17.137 [2024-12-09 09:55:24.170301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:18.071 Running I/O for 5 seconds... 
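The big-I/O pass now in flight reuses the same harness with only the transfer size changed, 65536 bytes instead of 4096; in the totals that follow, aggregate IOPS drops from about 17.4K (68 MiB/s) in the 4 KiB run to roughly 1.7K while throughput climbs to around 105 MiB/s. The lone differing flag:

    spdk=/home/vagrant/spdk_repo/spdk
    # identical to the verify run above except for the 64 KiB I/O size
    "$spdk/build/examples/bdevperf" --json "$spdk/test/bdev/bdev.json" \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''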
00:44:24.167 2228.00 IOPS, 139.25 MiB/s [2024-12-09T09:55:31.211Z] 3132.00 IOPS, 195.75 MiB/s 00:44:24.167 Latency(us) 00:44:24.167 [2024-12-09T09:55:31.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:24.167 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:24.167 Verification LBA range: start 0x0 length 0xbd0b 00:44:24.167 Nvme0n1 : 5.80 106.72 6.67 0.00 0.00 1143879.05 13107.20 1548079.48 00:44:24.167 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:24.167 Verification LBA range: start 0xbd0b length 0xbd0b 00:44:24.167 Nvme0n1 : 5.86 115.75 7.23 0.00 0.00 1062754.55 20256.58 1128649.08 00:44:24.167 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:24.167 Verification LBA range: start 0x0 length 0x4ff8 00:44:24.167 Nvme1n1p1 : 5.86 116.83 7.30 0.00 0.00 1037094.55 31218.97 1044763.00 00:44:24.168 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:24.168 Verification LBA range: start 0x4ff8 length 0x4ff8 00:44:24.168 Nvme1n1p1 : 5.86 112.15 7.01 0.00 0.00 1060222.67 60769.75 1243039.19 00:44:24.168 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:24.168 Verification LBA range: start 0x0 length 0x4ff7 00:44:24.168 Nvme1n1p2 : 5.95 111.98 7.00 0.00 0.00 1039918.56 60054.81 1616713.54 00:44:24.168 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:24.168 Verification LBA range: start 0x4ff7 length 0x4ff7 00:44:24.168 Nvme1n1p2 : 5.89 101.98 6.37 0.00 0.00 1150571.47 129642.12 1860745.77 00:44:24.168 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:24.168 Verification LBA range: start 0x0 length 0x8000 00:44:24.168 Nvme2n1 : 5.98 115.80 7.24 0.00 0.00 982017.28 81979.58 1639591.56 00:44:24.168 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:24.168 Verification LBA range: start 0x8000 length 0x8000 00:44:24.168 Nvme2n1 : 5.90 123.14 7.70 0.00 0.00 935138.49 33125.47 1075267.03 00:44:24.168 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:24.168 Verification LBA range: start 0x0 length 0x8000 00:44:24.168 Nvme2n2 : 5.99 115.28 7.21 0.00 0.00 953547.94 81979.58 1670095.59 00:44:24.168 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:24.168 Verification LBA range: start 0x8000 length 0x8000 00:44:24.168 Nvme2n2 : 5.90 124.78 7.80 0.00 0.00 894168.78 33840.41 1052389.00 00:44:24.168 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:24.168 Verification LBA range: start 0x0 length 0x8000 00:44:24.168 Nvme2n3 : 6.02 124.96 7.81 0.00 0.00 861959.94 29193.31 1494697.43 00:44:24.168 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:24.168 Verification LBA range: start 0x8000 length 0x8000 00:44:24.168 Nvme2n3 : 5.95 129.14 8.07 0.00 0.00 839197.94 45517.73 1006632.96 00:44:24.168 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:24.168 Verification LBA range: start 0x0 length 0x2000 00:44:24.168 Nvme3n1 : 6.05 139.79 8.74 0.00 0.00 750016.56 9413.35 1715851.64 00:44:24.168 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:24.168 Verification LBA range: start 0x2000 length 0x2000 00:44:24.168 Nvme3n1 : 6.03 148.31 9.27 0.00 0.00 713454.77 2770.39 1029510.98 00:44:24.168 [2024-12-09T09:55:31.212Z] 
=================================================================================================================== 00:44:24.168 [2024-12-09T09:55:31.212Z] Total : 1686.63 105.41 0.00 0.00 944981.45 2770.39 1860745.77 00:44:26.067 00:44:26.067 real 0m9.059s 00:44:26.067 user 0m16.858s 00:44:26.067 sys 0m0.368s 00:44:26.067 09:55:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:26.067 ************************************ 00:44:26.067 END TEST bdev_verify_big_io 00:44:26.067 ************************************ 00:44:26.067 09:55:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:44:26.067 09:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:26.067 09:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:44:26.067 09:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:26.067 09:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:26.067 ************************************ 00:44:26.067 START TEST bdev_write_zeroes 00:44:26.067 ************************************ 00:44:26.067 09:55:32 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:26.067 [2024-12-09 09:55:33.000353] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:44:26.067 [2024-12-09 09:55:33.000561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63789 ] 00:44:26.325 [2024-12-09 09:55:33.195195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:26.325 [2024-12-09 09:55:33.364779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:27.258 Running I/O for 1 seconds... 
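Unlike the two verify passes, this write_zeroes pass is pinned to a single core (-c 0x1 in the EAL parameters line above), so the table below carries one Core Mask 0x1 job row per bdev; the run lasts one second and issues 4 KiB zero-fill writes:

    spdk=/home/vagrant/spdk_repo/spdk
    # single-core zero-fill pass, flags as in the run_test line above
    "$spdk/build/examples/bdevperf" --json "$spdk/test/bdev/bdev.json" \
        -q 128 -o 4096 -w write_zeroes -t 1 ''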
00:44:28.199 50624.00 IOPS, 197.75 MiB/s 00:44:28.199 Latency(us) 00:44:28.199 [2024-12-09T09:55:35.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:28.199 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:28.199 Nvme0n1 : 1.03 7215.50 28.19 0.00 0.00 17689.78 13285.93 34078.72 00:44:28.199 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:28.199 Nvme1n1p1 : 1.03 7203.84 28.14 0.00 0.00 17683.38 13524.25 33363.78 00:44:28.199 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:28.199 Nvme1n1p2 : 1.03 7192.22 28.09 0.00 0.00 17653.47 13047.62 32410.53 00:44:28.199 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:28.199 Nvme2n1 : 1.03 7181.27 28.05 0.00 0.00 17567.71 13524.25 31218.97 00:44:28.199 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:28.199 Nvme2n2 : 1.04 7170.65 28.01 0.00 0.00 17534.17 13285.93 30504.03 00:44:28.199 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:28.199 Nvme2n3 : 1.04 7159.93 27.97 0.00 0.00 17516.65 12392.26 32648.84 00:44:28.199 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:28.199 Nvme3n1 : 1.04 7149.27 27.93 0.00 0.00 17490.61 11081.54 34555.35 00:44:28.199 [2024-12-09T09:55:35.243Z] =================================================================================================================== 00:44:28.199 [2024-12-09T09:55:35.243Z] Total : 50272.68 196.38 0.00 0.00 17590.82 11081.54 34555.35 00:44:29.573 ************************************ 00:44:29.573 END TEST bdev_write_zeroes 00:44:29.573 ************************************ 00:44:29.573 00:44:29.573 real 0m3.327s 00:44:29.573 user 0m2.879s 00:44:29.573 sys 0m0.321s 00:44:29.573 09:55:36 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:29.573 09:55:36 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:44:29.573 09:55:36 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:29.573 09:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:44:29.573 09:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:29.573 09:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:29.573 ************************************ 00:44:29.573 START TEST bdev_json_nonenclosed 00:44:29.573 ************************************ 00:44:29.573 09:55:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:29.573 [2024-12-09 09:55:36.373823] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:44:29.573 [2024-12-09 09:55:36.374018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63849 ] 00:44:29.574 [2024-12-09 09:55:36.561392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:29.832 [2024-12-09 09:55:36.699578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:29.832 [2024-12-09 09:55:36.699686] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:44:29.832 [2024-12-09 09:55:36.699716] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:29.832 [2024-12-09 09:55:36.699732] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:30.091 00:44:30.091 real 0m0.686s 00:44:30.091 user 0m0.422s 00:44:30.091 sys 0m0.158s 00:44:30.091 ************************************ 00:44:30.091 END TEST bdev_json_nonenclosed 00:44:30.091 ************************************ 00:44:30.091 09:55:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:30.091 09:55:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:44:30.091 09:55:36 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:30.091 09:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:44:30.091 09:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:30.091 09:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:30.091 ************************************ 00:44:30.091 START TEST bdev_json_nonarray 00:44:30.091 ************************************ 00:44:30.091 09:55:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:30.091 [2024-12-09 09:55:37.117381] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:44:30.091 [2024-12-09 09:55:37.117784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63874 ] 00:44:30.349 [2024-12-09 09:55:37.295544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:30.608 [2024-12-09 09:55:37.428526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:30.608 [2024-12-09 09:55:37.428657] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
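bdev_json_nonenclosed above and bdev_json_nonarray below are negative tests: each hands bdevperf a deliberately malformed config and passes only because the app refuses to start, exiting non-zero via spdk_app_stop. The two error lines name the violations, a config body not enclosed in {} and a 'subsystems' key that is not an array; both corrupt the standard wrapper of the form {"subsystems": [...]}. The fixture contents themselves are not shown in the trace, but the assertion pattern reduces to:

    spdk=/home/vagrant/spdk_repo/spdk
    # expect failure: a malformed config must never reach I/O
    # (nonarray.json is exercised the same way just below)
    if "$spdk/build/examples/bdevperf" --json "$spdk/test/bdev/nonenclosed.json" \
            -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "FAIL: malformed config was accepted" >&2
        exit 1
    fi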
00:44:30.608 [2024-12-09 09:55:37.428703] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:30.608 [2024-12-09 09:55:37.428717] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:30.866 00:44:30.866 real 0m0.696s 00:44:30.866 user 0m0.448s 00:44:30.866 sys 0m0.142s 00:44:30.866 09:55:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:30.866 09:55:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:44:30.866 ************************************ 00:44:30.866 END TEST bdev_json_nonarray 00:44:30.866 ************************************ 00:44:30.867 09:55:37 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:44:30.867 09:55:37 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:44:30.867 09:55:37 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:44:30.867 09:55:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:30.867 09:55:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:30.867 09:55:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:30.867 ************************************ 00:44:30.867 START TEST bdev_gpt_uuid 00:44:30.867 ************************************ 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63905 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63905 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63905 ']' 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:30.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:30.867 09:55:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:30.867 [2024-12-09 09:55:37.903926] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
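(Aside on the bdev_gpt_uuid test starting here: as the rpc_cmd and jq traces further down show, it looks each GPT partition bdev up by its unique partition GUID via bdev_get_bdevs -b <guid>, then asserts that exactly one bdev matches and that the GUID round-trips through both the alias list and the driver-specific GPT metadata. Below is a condensed sketch of that check against a running spdk_tgt, assuming the repo's scripts/rpc.py client, which the rpc_cmd helper wraps in these traces; the GUID is the SPDK_TEST_first value from this run.)

#!/usr/bin/env bash
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
guid=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first partition GUID from this run

# Fetch the bdev by its GPT unique partition GUID.
bdev=$("$rpc" bdev_get_bdevs -b "$guid")

# Exactly one bdev should match the GUID ...
[[ $(jq -r 'length' <<<"$bdev") == 1 ]]

# ... and the GUID should round-trip through both the alias list
# and the GPT driver-specific metadata.
[[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$guid" ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$guid" ]]

echo "GPT UUID round-trip OK for $guid"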
00:44:30.867 [2024-12-09 09:55:37.904134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63905 ] 00:44:31.125 [2024-12-09 09:55:38.090252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:31.384 [2024-12-09 09:55:38.216320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:32.319 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:32.319 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:44:32.319 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:32.319 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.319 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:32.577 Some configs were skipped because the RPC state that can call them passed over. 00:44:32.577 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.577 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:44:32.577 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.577 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:32.577 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.577 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:44:32.578 { 00:44:32.578 "name": "Nvme1n1p1", 00:44:32.578 "aliases": [ 00:44:32.578 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:44:32.578 ], 00:44:32.578 "product_name": "GPT Disk", 00:44:32.578 "block_size": 4096, 00:44:32.578 "num_blocks": 655104, 00:44:32.578 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:44:32.578 "assigned_rate_limits": { 00:44:32.578 "rw_ios_per_sec": 0, 00:44:32.578 "rw_mbytes_per_sec": 0, 00:44:32.578 "r_mbytes_per_sec": 0, 00:44:32.578 "w_mbytes_per_sec": 0 00:44:32.578 }, 00:44:32.578 "claimed": false, 00:44:32.578 "zoned": false, 00:44:32.578 "supported_io_types": { 00:44:32.578 "read": true, 00:44:32.578 "write": true, 00:44:32.578 "unmap": true, 00:44:32.578 "flush": true, 00:44:32.578 "reset": true, 00:44:32.578 "nvme_admin": false, 00:44:32.578 "nvme_io": false, 00:44:32.578 "nvme_io_md": false, 00:44:32.578 "write_zeroes": true, 00:44:32.578 "zcopy": false, 00:44:32.578 "get_zone_info": false, 00:44:32.578 "zone_management": false, 00:44:32.578 "zone_append": false, 00:44:32.578 "compare": true, 00:44:32.578 "compare_and_write": false, 00:44:32.578 "abort": true, 00:44:32.578 "seek_hole": false, 00:44:32.578 "seek_data": false, 00:44:32.578 "copy": true, 00:44:32.578 "nvme_iov_md": false 00:44:32.578 }, 00:44:32.578 "driver_specific": { 
00:44:32.578 "gpt": { 00:44:32.578 "base_bdev": "Nvme1n1", 00:44:32.578 "offset_blocks": 256, 00:44:32.578 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:44:32.578 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:44:32.578 "partition_name": "SPDK_TEST_first" 00:44:32.578 } 00:44:32.578 } 00:44:32.578 } 00:44:32.578 ]' 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.578 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:32.837 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.837 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:44:32.837 { 00:44:32.837 "name": "Nvme1n1p2", 00:44:32.837 "aliases": [ 00:44:32.837 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:44:32.837 ], 00:44:32.837 "product_name": "GPT Disk", 00:44:32.837 "block_size": 4096, 00:44:32.837 "num_blocks": 655103, 00:44:32.837 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:44:32.837 "assigned_rate_limits": { 00:44:32.837 "rw_ios_per_sec": 0, 00:44:32.837 "rw_mbytes_per_sec": 0, 00:44:32.837 "r_mbytes_per_sec": 0, 00:44:32.837 "w_mbytes_per_sec": 0 00:44:32.837 }, 00:44:32.837 "claimed": false, 00:44:32.837 "zoned": false, 00:44:32.837 "supported_io_types": { 00:44:32.837 "read": true, 00:44:32.837 "write": true, 00:44:32.837 "unmap": true, 00:44:32.837 "flush": true, 00:44:32.837 "reset": true, 00:44:32.837 "nvme_admin": false, 00:44:32.837 "nvme_io": false, 00:44:32.837 "nvme_io_md": false, 00:44:32.837 "write_zeroes": true, 00:44:32.837 "zcopy": false, 00:44:32.837 "get_zone_info": false, 00:44:32.837 "zone_management": false, 00:44:32.837 "zone_append": false, 00:44:32.837 "compare": true, 00:44:32.837 "compare_and_write": false, 00:44:32.837 "abort": true, 00:44:32.837 "seek_hole": false, 00:44:32.837 "seek_data": false, 00:44:32.837 "copy": true, 00:44:32.837 "nvme_iov_md": false 00:44:32.837 }, 00:44:32.837 "driver_specific": { 00:44:32.837 "gpt": { 00:44:32.837 "base_bdev": "Nvme1n1", 00:44:32.837 "offset_blocks": 655360, 00:44:32.837 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:44:32.837 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:44:32.837 "partition_name": "SPDK_TEST_second" 00:44:32.837 } 00:44:32.837 } 00:44:32.837 } 00:44:32.837 ]' 00:44:32.837 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:44:32.837 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:44:32.837 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63905 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63905 ']' 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63905 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63905 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:32.838 killing process with pid 63905 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63905' 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63905 00:44:32.838 09:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63905 00:44:35.373 ************************************ 00:44:35.373 END TEST bdev_gpt_uuid 00:44:35.373 ************************************ 00:44:35.373 00:44:35.373 real 0m4.261s 00:44:35.373 user 0m4.524s 00:44:35.373 sys 0m0.604s 00:44:35.373 09:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:35.373 09:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:35.373 09:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:44:35.373 09:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:44:35.373 09:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:44:35.373 09:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:44:35.373 09:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:35.373 09:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:44:35.373 09:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:44:35.373 09:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:44:35.373 09:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:44:35.373 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:44:35.633 Waiting for block devices as requested 00:44:35.633 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:44:35.891 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:44:35.891 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:44:35.891 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:44:41.234 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:44:41.234 09:55:47 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:44:41.234 09:55:47 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:44:41.234 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:44:41.234 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:44:41.234 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:44:41.234 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:44:41.234 09:55:48 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:44:41.234 00:44:41.234 real 1m7.080s 00:44:41.234 user 1m26.514s 00:44:41.234 sys 0m10.954s 00:44:41.234 09:55:48 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:41.234 ************************************ 00:44:41.234 END TEST blockdev_nvme_gpt 00:44:41.234 09:55:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:41.234 ************************************ 00:44:41.234 09:55:48 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:44:41.234 09:55:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:41.234 09:55:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:41.234 09:55:48 -- common/autotest_common.sh@10 -- # set +x 00:44:41.234 ************************************ 00:44:41.234 START TEST nvme 00:44:41.234 ************************************ 00:44:41.234 09:55:48 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:44:41.493 * Looking for test storage... 00:44:41.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:41.493 09:55:48 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:41.493 09:55:48 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:44:41.493 09:55:48 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:41.493 09:55:48 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:41.493 09:55:48 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:41.493 09:55:48 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:41.493 09:55:48 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:41.493 09:55:48 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:44:41.493 09:55:48 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:44:41.493 09:55:48 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:44:41.493 09:55:48 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:44:41.493 09:55:48 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:44:41.493 09:55:48 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:44:41.493 09:55:48 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:44:41.493 09:55:48 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:41.493 09:55:48 nvme -- scripts/common.sh@344 -- # case "$op" in 00:44:41.493 09:55:48 nvme -- scripts/common.sh@345 -- # : 1 00:44:41.493 09:55:48 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:41.493 09:55:48 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:41.493 09:55:48 nvme -- scripts/common.sh@365 -- # decimal 1 00:44:41.493 09:55:48 nvme -- scripts/common.sh@353 -- # local d=1 00:44:41.493 09:55:48 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:41.493 09:55:48 nvme -- scripts/common.sh@355 -- # echo 1 00:44:41.493 09:55:48 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:44:41.493 09:55:48 nvme -- scripts/common.sh@366 -- # decimal 2 00:44:41.493 09:55:48 nvme -- scripts/common.sh@353 -- # local d=2 00:44:41.493 09:55:48 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:41.493 09:55:48 nvme -- scripts/common.sh@355 -- # echo 2 00:44:41.493 09:55:48 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:44:41.493 09:55:48 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:41.494 09:55:48 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:41.494 09:55:48 nvme -- scripts/common.sh@368 -- # return 0 00:44:41.494 09:55:48 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:41.494 09:55:48 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:41.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.494 --rc genhtml_branch_coverage=1 00:44:41.494 --rc genhtml_function_coverage=1 00:44:41.494 --rc genhtml_legend=1 00:44:41.494 --rc geninfo_all_blocks=1 00:44:41.494 --rc geninfo_unexecuted_blocks=1 00:44:41.494 00:44:41.494 ' 00:44:41.494 09:55:48 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:41.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.494 --rc genhtml_branch_coverage=1 00:44:41.494 --rc genhtml_function_coverage=1 00:44:41.494 --rc genhtml_legend=1 00:44:41.494 --rc geninfo_all_blocks=1 00:44:41.494 --rc geninfo_unexecuted_blocks=1 00:44:41.494 00:44:41.494 ' 00:44:41.494 09:55:48 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:41.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.494 --rc genhtml_branch_coverage=1 00:44:41.494 --rc genhtml_function_coverage=1 00:44:41.494 --rc genhtml_legend=1 00:44:41.494 --rc geninfo_all_blocks=1 00:44:41.494 --rc geninfo_unexecuted_blocks=1 00:44:41.494 00:44:41.494 ' 00:44:41.494 09:55:48 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:41.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.494 --rc genhtml_branch_coverage=1 00:44:41.494 --rc genhtml_function_coverage=1 00:44:41.494 --rc genhtml_legend=1 00:44:41.494 --rc geninfo_all_blocks=1 00:44:41.494 --rc geninfo_unexecuted_blocks=1 00:44:41.494 00:44:41.494 ' 00:44:41.494 09:55:48 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:42.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:44:42.626 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:44:42.626 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:44:42.626 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:44:42.626 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:44:42.884 09:55:49 nvme -- nvme/nvme.sh@79 -- # uname 00:44:42.884 09:55:49 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:44:42.884 09:55:49 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:44:42.884 09:55:49 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:44:42.884 09:55:49 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:44:42.884 Waiting for stub to ready 
for secondary processes... 00:44:42.884 09:55:49 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:44:42.884 09:55:49 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:44:42.884 09:55:49 nvme -- common/autotest_common.sh@1075 -- # stubpid=64557 00:44:42.884 09:55:49 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:44:42.884 09:55:49 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:44:42.884 09:55:49 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:44:42.884 09:55:49 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64557 ]] 00:44:42.884 09:55:49 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:44:42.884 [2024-12-09 09:55:49.842454] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:44:42.884 [2024-12-09 09:55:49.842876] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:44:43.821 09:55:50 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:44:43.821 09:55:50 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64557 ]] 00:44:43.821 09:55:50 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:44:44.389 [2024-12-09 09:55:51.197285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:44.389 [2024-12-09 09:55:51.343864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:44.389 [2024-12-09 09:55:51.343954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:44.389 [2024-12-09 09:55:51.343958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:44.389 [2024-12-09 09:55:51.363359] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:44:44.389 [2024-12-09 09:55:51.363619] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:44:44.389 [2024-12-09 09:55:51.376496] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:44:44.389 [2024-12-09 09:55:51.376941] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:44:44.389 [2024-12-09 09:55:51.380063] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:44:44.389 [2024-12-09 09:55:51.380555] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:44:44.389 [2024-12-09 09:55:51.380827] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:44:44.389 [2024-12-09 09:55:51.383486] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:44:44.389 [2024-12-09 09:55:51.384013] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:44:44.389 [2024-12-09 09:55:51.384398] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:44:44.389 [2024-12-09 09:55:51.386993] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:44:44.389 [2024-12-09 09:55:51.387413] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:44:44.389 [2024-12-09 09:55:51.388320] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:44:44.389 [2024-12-09 09:55:51.388542] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:44:44.389 [2024-12-09 09:55:51.388604] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:44:44.957 done. 00:44:44.957 09:55:51 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:44:44.957 09:55:51 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:44:44.957 09:55:51 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:44:44.957 09:55:51 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:44:44.957 09:55:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:44.957 09:55:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:44.957 ************************************ 00:44:44.957 START TEST nvme_reset 00:44:44.957 ************************************ 00:44:44.957 09:55:51 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:44:45.216 Initializing NVMe Controllers 00:44:45.216 Skipping QEMU NVMe SSD at 0000:00:10.0 00:44:45.216 Skipping QEMU NVMe SSD at 0000:00:11.0 00:44:45.216 Skipping QEMU NVMe SSD at 0000:00:13.0 00:44:45.216 Skipping QEMU NVMe SSD at 0000:00:12.0 00:44:45.216 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:44:45.216 00:44:45.216 real 0m0.303s 00:44:45.216 user 0m0.106s 00:44:45.216 sys 0m0.154s 00:44:45.216 09:55:52 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:45.216 09:55:52 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:44:45.216 ************************************ 00:44:45.216 END TEST nvme_reset 00:44:45.216 ************************************ 00:44:45.216 09:55:52 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:44:45.216 09:55:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:45.216 09:55:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:45.216 09:55:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:45.216 ************************************ 00:44:45.216 START TEST nvme_identify 00:44:45.216 ************************************ 00:44:45.216 09:55:52 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:44:45.216 09:55:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:44:45.216 09:55:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:44:45.216 09:55:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:44:45.216 09:55:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:44:45.216 09:55:52 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:44:45.216 09:55:52 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:44:45.216 09:55:52 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:45.216 09:55:52 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:45.216 09:55:52 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:44:45.216 09:55:52 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:44:45.216 09:55:52 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:44:45.216 09:55:52 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:44:45.478 [2024-12-09 09:55:52.474807] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64590 terminated unexpected 00:44:45.478 ===================================================== 00:44:45.478 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:45.478 ===================================================== 00:44:45.478 Controller Capabilities/Features 00:44:45.478 ================================ 00:44:45.478 Vendor ID: 1b36 00:44:45.478 Subsystem Vendor ID: 1af4 00:44:45.478 Serial Number: 12340 00:44:45.478 Model Number: QEMU NVMe Ctrl 00:44:45.478 Firmware Version: 8.0.0 00:44:45.478 Recommended Arb Burst: 6 00:44:45.478 IEEE OUI Identifier: 00 54 52 00:44:45.478 Multi-path I/O 00:44:45.478 May have multiple subsystem ports: No 00:44:45.478 May have multiple controllers: No 00:44:45.478 Associated with SR-IOV VF: No 00:44:45.478 Max Data Transfer Size: 524288 00:44:45.478 Max Number of Namespaces: 256 00:44:45.478 Max Number of I/O Queues: 64 00:44:45.478 NVMe Specification Version (VS): 1.4 00:44:45.478 NVMe Specification Version (Identify): 1.4 00:44:45.478 Maximum Queue Entries: 2048 00:44:45.478 Contiguous Queues Required: Yes 00:44:45.478 Arbitration Mechanisms Supported 00:44:45.478 Weighted Round Robin: Not Supported 00:44:45.478 Vendor Specific: Not Supported 00:44:45.478 Reset Timeout: 7500 ms 00:44:45.478 Doorbell Stride: 4 bytes 00:44:45.478 NVM Subsystem Reset: Not Supported 00:44:45.478 Command Sets Supported 00:44:45.478 NVM Command Set: Supported 00:44:45.478 Boot Partition: Not Supported 00:44:45.478 Memory Page Size Minimum: 4096 bytes 00:44:45.478 Memory Page Size Maximum: 65536 bytes 00:44:45.478 Persistent Memory Region: Not Supported 00:44:45.478 Optional Asynchronous Events Supported 00:44:45.478 Namespace Attribute Notices: Supported 00:44:45.478 Firmware Activation Notices: Not Supported 00:44:45.478 ANA Change Notices: Not Supported 00:44:45.478 PLE Aggregate Log Change Notices: Not Supported 00:44:45.478 LBA Status Info Alert Notices: Not Supported 00:44:45.478 EGE Aggregate Log Change Notices: Not Supported 00:44:45.478 Normal NVM Subsystem Shutdown event: Not Supported 00:44:45.478 Zone Descriptor Change Notices: Not Supported 00:44:45.478 Discovery Log Change Notices: Not Supported 00:44:45.478 Controller Attributes 00:44:45.478 128-bit Host Identifier: Not Supported 00:44:45.478 Non-Operational Permissive Mode: Not Supported 00:44:45.478 NVM Sets: Not Supported 00:44:45.478 Read Recovery Levels: Not Supported 00:44:45.478 Endurance Groups: Not Supported 00:44:45.478 Predictable Latency Mode: Not Supported 00:44:45.478 Traffic Based Keep ALive: Not Supported 00:44:45.478 Namespace Granularity: Not Supported 00:44:45.478 SQ Associations: Not Supported 00:44:45.478 UUID List: Not Supported 00:44:45.478 Multi-Domain Subsystem: Not Supported 00:44:45.478 Fixed Capacity Management: Not Supported 00:44:45.478 Variable Capacity Management: Not Supported 00:44:45.478 Delete Endurance Group: Not Supported 00:44:45.478 Delete NVM Set: Not Supported 00:44:45.478 Extended LBA Formats Supported: Supported 00:44:45.478 Flexible Data Placement Supported: Not Supported 00:44:45.478 00:44:45.478 Controller Memory Buffer Support 00:44:45.478 ================================ 00:44:45.478 Supported: No 
00:44:45.478 00:44:45.478 Persistent Memory Region Support 00:44:45.478 ================================ 00:44:45.478 Supported: No 00:44:45.478 00:44:45.478 Admin Command Set Attributes 00:44:45.478 ============================ 00:44:45.478 Security Send/Receive: Not Supported 00:44:45.478 Format NVM: Supported 00:44:45.478 Firmware Activate/Download: Not Supported 00:44:45.478 Namespace Management: Supported 00:44:45.478 Device Self-Test: Not Supported 00:44:45.478 Directives: Supported 00:44:45.478 NVMe-MI: Not Supported 00:44:45.478 Virtualization Management: Not Supported 00:44:45.478 Doorbell Buffer Config: Supported 00:44:45.478 Get LBA Status Capability: Not Supported 00:44:45.478 Command & Feature Lockdown Capability: Not Supported 00:44:45.478 Abort Command Limit: 4 00:44:45.478 Async Event Request Limit: 4 00:44:45.478 Number of Firmware Slots: N/A 00:44:45.478 Firmware Slot 1 Read-Only: N/A 00:44:45.478 Firmware Activation Without Reset: N/A 00:44:45.478 Multiple Update Detection Support: N/A 00:44:45.478 Firmware Update Granularity: No Information Provided 00:44:45.478 Per-Namespace SMART Log: Yes 00:44:45.478 Asymmetric Namespace Access Log Page: Not Supported 00:44:45.478 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:44:45.478 Command Effects Log Page: Supported 00:44:45.478 Get Log Page Extended Data: Supported 00:44:45.478 Telemetry Log Pages: Not Supported 00:44:45.478 Persistent Event Log Pages: Not Supported 00:44:45.478 Supported Log Pages Log Page: May Support 00:44:45.478 Commands Supported & Effects Log Page: Not Supported 00:44:45.478 Feature Identifiers & Effects Log Page:May Support 00:44:45.478 NVMe-MI Commands & Effects Log Page: May Support 00:44:45.478 Data Area 4 for Telemetry Log: Not Supported 00:44:45.478 Error Log Page Entries Supported: 1 00:44:45.478 Keep Alive: Not Supported 00:44:45.479 00:44:45.479 NVM Command Set Attributes 00:44:45.479 ========================== 00:44:45.479 Submission Queue Entry Size 00:44:45.479 Max: 64 00:44:45.479 Min: 64 00:44:45.479 Completion Queue Entry Size 00:44:45.479 Max: 16 00:44:45.479 Min: 16 00:44:45.479 Number of Namespaces: 256 00:44:45.479 Compare Command: Supported 00:44:45.479 Write Uncorrectable Command: Not Supported 00:44:45.479 Dataset Management Command: Supported 00:44:45.479 Write Zeroes Command: Supported 00:44:45.479 Set Features Save Field: Supported 00:44:45.479 Reservations: Not Supported 00:44:45.479 Timestamp: Supported 00:44:45.479 Copy: Supported 00:44:45.479 Volatile Write Cache: Present 00:44:45.479 Atomic Write Unit (Normal): 1 00:44:45.479 Atomic Write Unit (PFail): 1 00:44:45.479 Atomic Compare & Write Unit: 1 00:44:45.479 Fused Compare & Write: Not Supported 00:44:45.479 Scatter-Gather List 00:44:45.479 SGL Command Set: Supported 00:44:45.479 SGL Keyed: Not Supported 00:44:45.479 SGL Bit Bucket Descriptor: Not Supported 00:44:45.479 SGL Metadata Pointer: Not Supported 00:44:45.479 Oversized SGL: Not Supported 00:44:45.479 SGL Metadata Address: Not Supported 00:44:45.479 SGL Offset: Not Supported 00:44:45.479 Transport SGL Data Block: Not Supported 00:44:45.479 Replay Protected Memory Block: Not Supported 00:44:45.479 00:44:45.479 Firmware Slot Information 00:44:45.479 ========================= 00:44:45.479 Active slot: 1 00:44:45.479 Slot 1 Firmware Revision: 1.0 00:44:45.479 00:44:45.479 00:44:45.479 Commands Supported and Effects 00:44:45.479 ============================== 00:44:45.479 Admin Commands 00:44:45.479 -------------- 00:44:45.479 Delete I/O Submission Queue (00h): Supported 
00:44:45.479 Create I/O Submission Queue (01h): Supported 00:44:45.479 Get Log Page (02h): Supported 00:44:45.479 Delete I/O Completion Queue (04h): Supported 00:44:45.479 Create I/O Completion Queue (05h): Supported 00:44:45.479 Identify (06h): Supported 00:44:45.479 Abort (08h): Supported 00:44:45.479 Set Features (09h): Supported 00:44:45.479 Get Features (0Ah): Supported 00:44:45.479 Asynchronous Event Request (0Ch): Supported 00:44:45.479 Namespace Attachment (15h): Supported NS-Inventory-Change 00:44:45.479 Directive Send (19h): Supported 00:44:45.479 Directive Receive (1Ah): Supported 00:44:45.479 Virtualization Management (1Ch): Supported 00:44:45.479 Doorbell Buffer Config (7Ch): Supported 00:44:45.479 Format NVM (80h): Supported LBA-Change 00:44:45.479 I/O Commands 00:44:45.479 ------------ 00:44:45.479 Flush (00h): Supported LBA-Change 00:44:45.479 Write (01h): Supported LBA-Change 00:44:45.479 Read (02h): Supported 00:44:45.479 Compare (05h): Supported 00:44:45.479 Write Zeroes (08h): Supported LBA-Change 00:44:45.479 Dataset Management (09h): Supported LBA-Change 00:44:45.479 Unknown (0Ch): Supported 00:44:45.479 Unknown (12h): Supported 00:44:45.479 Copy (19h): Supported LBA-Change 00:44:45.479 Unknown (1Dh): Supported LBA-Change 00:44:45.479 00:44:45.479 Error Log 00:44:45.479 ========= 00:44:45.479 00:44:45.479 Arbitration 00:44:45.479 =========== 00:44:45.479 Arbitration Burst: no limit 00:44:45.479 00:44:45.479 Power Management 00:44:45.479 ================ 00:44:45.479 Number of Power States: 1 00:44:45.479 Current Power State: Power State #0 00:44:45.479 Power State #0: 00:44:45.479 Max Power: 25.00 W 00:44:45.479 Non-Operational State: Operational 00:44:45.479 Entry Latency: 16 microseconds 00:44:45.479 Exit Latency: 4 microseconds 00:44:45.479 Relative Read Throughput: 0 00:44:45.479 Relative Read Latency: 0 00:44:45.479 Relative Write Throughput: 0 00:44:45.479 Relative Write Latency: 0 00:44:45.479 Idle Power[2024-12-09 09:55:52.476273] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64590 terminated unexpected 00:44:45.479 : Not Reported 00:44:45.479 Active Power: Not Reported 00:44:45.479 Non-Operational Permissive Mode: Not Supported 00:44:45.479 00:44:45.479 Health Information 00:44:45.479 ================== 00:44:45.479 Critical Warnings: 00:44:45.479 Available Spare Space: OK 00:44:45.479 Temperature: OK 00:44:45.479 Device Reliability: OK 00:44:45.479 Read Only: No 00:44:45.479 Volatile Memory Backup: OK 00:44:45.479 Current Temperature: 323 Kelvin (50 Celsius) 00:44:45.479 Temperature Threshold: 343 Kelvin (70 Celsius) 00:44:45.479 Available Spare: 0% 00:44:45.479 Available Spare Threshold: 0% 00:44:45.479 Life Percentage Used: 0% 00:44:45.479 Data Units Read: 654 00:44:45.479 Data Units Written: 582 00:44:45.479 Host Read Commands: 31347 00:44:45.479 Host Write Commands: 31133 00:44:45.479 Controller Busy Time: 0 minutes 00:44:45.479 Power Cycles: 0 00:44:45.479 Power On Hours: 0 hours 00:44:45.479 Unsafe Shutdowns: 0 00:44:45.479 Unrecoverable Media Errors: 0 00:44:45.479 Lifetime Error Log Entries: 0 00:44:45.479 Warning Temperature Time: 0 minutes 00:44:45.479 Critical Temperature Time: 0 minutes 00:44:45.479 00:44:45.479 Number of Queues 00:44:45.479 ================ 00:44:45.479 Number of I/O Submission Queues: 64 00:44:45.479 Number of I/O Completion Queues: 64 00:44:45.479 00:44:45.479 ZNS Specific Controller Data 00:44:45.479 ============================ 00:44:45.479 Zone Append Size Limit: 0 00:44:45.479 
00:44:45.479 00:44:45.479 Active Namespaces 00:44:45.479 ================= 00:44:45.479 Namespace ID:1 00:44:45.479 Error Recovery Timeout: Unlimited 00:44:45.479 Command Set Identifier: NVM (00h) 00:44:45.479 Deallocate: Supported 00:44:45.479 Deallocated/Unwritten Error: Supported 00:44:45.479 Deallocated Read Value: All 0x00 00:44:45.479 Deallocate in Write Zeroes: Not Supported 00:44:45.479 Deallocated Guard Field: 0xFFFF 00:44:45.479 Flush: Supported 00:44:45.479 Reservation: Not Supported 00:44:45.479 Metadata Transferred as: Separate Metadata Buffer 00:44:45.479 Namespace Sharing Capabilities: Private 00:44:45.479 Size (in LBAs): 1548666 (5GiB) 00:44:45.479 Capacity (in LBAs): 1548666 (5GiB) 00:44:45.479 Utilization (in LBAs): 1548666 (5GiB) 00:44:45.479 Thin Provisioning: Not Supported 00:44:45.479 Per-NS Atomic Units: No 00:44:45.479 Maximum Single Source Range Length: 128 00:44:45.479 Maximum Copy Length: 128 00:44:45.479 Maximum Source Range Count: 128 00:44:45.479 NGUID/EUI64 Never Reused: No 00:44:45.479 Namespace Write Protected: No 00:44:45.479 Number of LBA Formats: 8 00:44:45.479 Current LBA Format: LBA Format #07 00:44:45.479 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:45.479 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:45.479 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:45.479 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:45.479 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:45.479 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:45.479 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:45.479 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:45.479 00:44:45.479 NVM Specific Namespace Data 00:44:45.479 =========================== 00:44:45.479 Logical Block Storage Tag Mask: 0 00:44:45.479 Protection Information Capabilities: 00:44:45.479 16b Guard Protection Information Storage Tag Support: No 00:44:45.479 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:45.479 Storage Tag Check Read Support: No 00:44:45.479 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.479 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.479 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.479 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.479 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.479 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.479 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.479 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.479 ===================================================== 00:44:45.479 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:44:45.479 ===================================================== 00:44:45.479 Controller Capabilities/Features 00:44:45.479 ================================ 00:44:45.479 Vendor ID: 1b36 00:44:45.479 Subsystem Vendor ID: 1af4 00:44:45.479 Serial Number: 12341 00:44:45.479 Model Number: QEMU NVMe Ctrl 00:44:45.479 Firmware Version: 8.0.0 00:44:45.479 Recommended Arb Burst: 6 00:44:45.479 IEEE OUI Identifier: 00 54 52 00:44:45.479 Multi-path I/O 00:44:45.479 May have multiple subsystem ports: No 00:44:45.479 May have multiple controllers: No 
00:44:45.479 Associated with SR-IOV VF: No 00:44:45.479 Max Data Transfer Size: 524288 00:44:45.479 Max Number of Namespaces: 256 00:44:45.479 Max Number of I/O Queues: 64 00:44:45.479 NVMe Specification Version (VS): 1.4 00:44:45.479 NVMe Specification Version (Identify): 1.4 00:44:45.480 Maximum Queue Entries: 2048 00:44:45.480 Contiguous Queues Required: Yes 00:44:45.480 Arbitration Mechanisms Supported 00:44:45.480 Weighted Round Robin: Not Supported 00:44:45.480 Vendor Specific: Not Supported 00:44:45.480 Reset Timeout: 7500 ms 00:44:45.480 Doorbell Stride: 4 bytes 00:44:45.480 NVM Subsystem Reset: Not Supported 00:44:45.480 Command Sets Supported 00:44:45.480 NVM Command Set: Supported 00:44:45.480 Boot Partition: Not Supported 00:44:45.480 Memory Page Size Minimum: 4096 bytes 00:44:45.480 Memory Page Size Maximum: 65536 bytes 00:44:45.480 Persistent Memory Region: Not Supported 00:44:45.480 Optional Asynchronous Events Supported 00:44:45.480 Namespace Attribute Notices: Supported 00:44:45.480 Firmware Activation Notices: Not Supported 00:44:45.480 ANA Change Notices: Not Supported 00:44:45.480 PLE Aggregate Log Change Notices: Not Supported 00:44:45.480 LBA Status Info Alert Notices: Not Supported 00:44:45.480 EGE Aggregate Log Change Notices: Not Supported 00:44:45.480 Normal NVM Subsystem Shutdown event: Not Supported 00:44:45.480 Zone Descriptor Change Notices: Not Supported 00:44:45.480 Discovery Log Change Notices: Not Supported 00:44:45.480 Controller Attributes 00:44:45.480 128-bit Host Identifier: Not Supported 00:44:45.480 Non-Operational Permissive Mode: Not Supported 00:44:45.480 NVM Sets: Not Supported 00:44:45.480 Read Recovery Levels: Not Supported 00:44:45.480 Endurance Groups: Not Supported 00:44:45.480 Predictable Latency Mode: Not Supported 00:44:45.480 Traffic Based Keep ALive: Not Supported 00:44:45.480 Namespace Granularity: Not Supported 00:44:45.480 SQ Associations: Not Supported 00:44:45.480 UUID List: Not Supported 00:44:45.480 Multi-Domain Subsystem: Not Supported 00:44:45.480 Fixed Capacity Management: Not Supported 00:44:45.480 Variable Capacity Management: Not Supported 00:44:45.480 Delete Endurance Group: Not Supported 00:44:45.480 Delete NVM Set: Not Supported 00:44:45.480 Extended LBA Formats Supported: Supported 00:44:45.480 Flexible Data Placement Supported: Not Supported 00:44:45.480 00:44:45.480 Controller Memory Buffer Support 00:44:45.480 ================================ 00:44:45.480 Supported: No 00:44:45.480 00:44:45.480 Persistent Memory Region Support 00:44:45.480 ================================ 00:44:45.480 Supported: No 00:44:45.480 00:44:45.480 Admin Command Set Attributes 00:44:45.480 ============================ 00:44:45.480 Security Send/Receive: Not Supported 00:44:45.480 Format NVM: Supported 00:44:45.480 Firmware Activate/Download: Not Supported 00:44:45.480 Namespace Management: Supported 00:44:45.480 Device Self-Test: Not Supported 00:44:45.480 Directives: Supported 00:44:45.480 NVMe-MI: Not Supported 00:44:45.480 Virtualization Management: Not Supported 00:44:45.480 Doorbell Buffer Config: Supported 00:44:45.480 Get LBA Status Capability: Not Supported 00:44:45.480 Command & Feature Lockdown Capability: Not Supported 00:44:45.480 Abort Command Limit: 4 00:44:45.480 Async Event Request Limit: 4 00:44:45.480 Number of Firmware Slots: N/A 00:44:45.480 Firmware Slot 1 Read-Only: N/A 00:44:45.480 Firmware Activation Without Reset: N/A 00:44:45.480 Multiple Update Detection Support: N/A 00:44:45.480 Firmware Update Granularity: No 
Information Provided 00:44:45.480 Per-Namespace SMART Log: Yes 00:44:45.480 Asymmetric Namespace Access Log Page: Not Supported 00:44:45.480 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:44:45.480 Command Effects Log Page: Supported 00:44:45.480 Get Log Page Extended Data: Supported 00:44:45.480 Telemetry Log Pages: Not Supported 00:44:45.480 Persistent Event Log Pages: Not Supported 00:44:45.480 Supported Log Pages Log Page: May Support 00:44:45.480 Commands Supported & Effects Log Page: Not Supported 00:44:45.480 Feature Identifiers & Effects Log Page:May Support 00:44:45.480 NVMe-MI Commands & Effects Log Page: May Support 00:44:45.480 Data Area 4 for Telemetry Log: Not Supported 00:44:45.480 Error Log Page Entries Supported: 1 00:44:45.480 Keep Alive: Not Supported 00:44:45.480 00:44:45.480 NVM Command Set Attributes 00:44:45.480 ========================== 00:44:45.480 Submission Queue Entry Size 00:44:45.480 Max: 64 00:44:45.480 Min: 64 00:44:45.480 Completion Queue Entry Size 00:44:45.480 Max: 16 00:44:45.480 Min: 16 00:44:45.480 Number of Namespaces: 256 00:44:45.480 Compare Command: Supported 00:44:45.480 Write Uncorrectable Command: Not Supported 00:44:45.480 Dataset Management Command: Supported 00:44:45.480 Write Zeroes Command: Supported 00:44:45.480 Set Features Save Field: Supported 00:44:45.480 Reservations: Not Supported 00:44:45.480 Timestamp: Supported 00:44:45.480 Copy: Supported 00:44:45.480 Volatile Write Cache: Present 00:44:45.480 Atomic Write Unit (Normal): 1 00:44:45.480 Atomic Write Unit (PFail): 1 00:44:45.480 Atomic Compare & Write Unit: 1 00:44:45.480 Fused Compare & Write: Not Supported 00:44:45.480 Scatter-Gather List 00:44:45.480 SGL Command Set: Supported 00:44:45.480 SGL Keyed: Not Supported 00:44:45.480 SGL Bit Bucket Descriptor: Not Supported 00:44:45.480 SGL Metadata Pointer: Not Supported 00:44:45.480 Oversized SGL: Not Supported 00:44:45.480 SGL Metadata Address: Not Supported 00:44:45.480 SGL Offset: Not Supported 00:44:45.480 Transport SGL Data Block: Not Supported 00:44:45.480 Replay Protected Memory Block: Not Supported 00:44:45.480 00:44:45.480 Firmware Slot Information 00:44:45.480 ========================= 00:44:45.480 Active slot: 1 00:44:45.480 Slot 1 Firmware Revision: 1.0 00:44:45.480 00:44:45.480 00:44:45.480 Commands Supported and Effects 00:44:45.480 ============================== 00:44:45.480 Admin Commands 00:44:45.480 -------------- 00:44:45.480 Delete I/O Submission Queue (00h): Supported 00:44:45.480 Create I/O Submission Queue (01h): Supported 00:44:45.480 Get Log Page (02h): Supported 00:44:45.480 Delete I/O Completion Queue (04h): Supported 00:44:45.480 Create I/O Completion Queue (05h): Supported 00:44:45.480 Identify (06h): Supported 00:44:45.480 Abort (08h): Supported 00:44:45.480 Set Features (09h): Supported 00:44:45.480 Get Features (0Ah): Supported 00:44:45.480 Asynchronous Event Request (0Ch): Supported 00:44:45.480 Namespace Attachment (15h): Supported NS-Inventory-Change 00:44:45.480 Directive Send (19h): Supported 00:44:45.480 Directive Receive (1Ah): Supported 00:44:45.480 Virtualization Management (1Ch): Supported 00:44:45.480 Doorbell Buffer Config (7Ch): Supported 00:44:45.480 Format NVM (80h): Supported LBA-Change 00:44:45.480 I/O Commands 00:44:45.480 ------------ 00:44:45.480 Flush (00h): Supported LBA-Change 00:44:45.480 Write (01h): Supported LBA-Change 00:44:45.480 Read (02h): Supported 00:44:45.480 Compare (05h): Supported 00:44:45.480 Write Zeroes (08h): Supported LBA-Change 00:44:45.480 Dataset Management 
(09h): Supported LBA-Change 00:44:45.480 Unknown (0Ch): Supported 00:44:45.480 Unknown (12h): Supported 00:44:45.480 Copy (19h): Supported LBA-Change 00:44:45.480 Unknown (1Dh): Supported LBA-Change 00:44:45.480 00:44:45.480 Error Log 00:44:45.480 ========= 00:44:45.480 00:44:45.480 Arbitration 00:44:45.480 =========== 00:44:45.480 Arbitration Burst: no limit 00:44:45.480 00:44:45.480 Power Management 00:44:45.480 ================ 00:44:45.480 Number of Power States: 1 00:44:45.480 Current Power State: Power State #0 00:44:45.480 Power State #0: 00:44:45.480 Max Power: 25.00 W 00:44:45.480 Non-Operational State: Operational 00:44:45.480 Entry Latency: 16 microseconds 00:44:45.480 Exit Latency: 4 microseconds 00:44:45.480 Relative Read Throughput: 0 00:44:45.480 Relative Read Latency: 0 00:44:45.480 Relative Write Throughput: 0 00:44:45.480 Relative Write Latency: 0 00:44:45.480 Idle Power: Not Reported 00:44:45.480 Active Power: Not Reported 00:44:45.480 Non-Operational Permissive Mode: Not Supported 00:44:45.480 00:44:45.480 Health Information 00:44:45.480 ================== 00:44:45.480 Critical Warnings: 00:44:45.480 Available Spare Space: OK 00:44:45.480 Temperature: [2024-12-09 09:55:52.477341] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64590 terminated unexpected 00:44:45.480 OK 00:44:45.480 Device Reliability: OK 00:44:45.480 Read Only: No 00:44:45.480 Volatile Memory Backup: OK 00:44:45.480 Current Temperature: 323 Kelvin (50 Celsius) 00:44:45.480 Temperature Threshold: 343 Kelvin (70 Celsius) 00:44:45.480 Available Spare: 0% 00:44:45.480 Available Spare Threshold: 0% 00:44:45.480 Life Percentage Used: 0% 00:44:45.480 Data Units Read: 988 00:44:45.480 Data Units Written: 854 00:44:45.480 Host Read Commands: 46392 00:44:45.480 Host Write Commands: 45202 00:44:45.480 Controller Busy Time: 0 minutes 00:44:45.480 Power Cycles: 0 00:44:45.480 Power On Hours: 0 hours 00:44:45.480 Unsafe Shutdowns: 0 00:44:45.480 Unrecoverable Media Errors: 0 00:44:45.480 Lifetime Error Log Entries: 0 00:44:45.480 Warning Temperature Time: 0 minutes 00:44:45.480 Critical Temperature Time: 0 minutes 00:44:45.480 00:44:45.480 Number of Queues 00:44:45.480 ================ 00:44:45.481 Number of I/O Submission Queues: 64 00:44:45.481 Number of I/O Completion Queues: 64 00:44:45.481 00:44:45.481 ZNS Specific Controller Data 00:44:45.481 ============================ 00:44:45.481 Zone Append Size Limit: 0 00:44:45.481 00:44:45.481 00:44:45.481 Active Namespaces 00:44:45.481 ================= 00:44:45.481 Namespace ID:1 00:44:45.481 Error Recovery Timeout: Unlimited 00:44:45.481 Command Set Identifier: NVM (00h) 00:44:45.481 Deallocate: Supported 00:44:45.481 Deallocated/Unwritten Error: Supported 00:44:45.481 Deallocated Read Value: All 0x00 00:44:45.481 Deallocate in Write Zeroes: Not Supported 00:44:45.481 Deallocated Guard Field: 0xFFFF 00:44:45.481 Flush: Supported 00:44:45.481 Reservation: Not Supported 00:44:45.481 Namespace Sharing Capabilities: Private 00:44:45.481 Size (in LBAs): 1310720 (5GiB) 00:44:45.481 Capacity (in LBAs): 1310720 (5GiB) 00:44:45.481 Utilization (in LBAs): 1310720 (5GiB) 00:44:45.481 Thin Provisioning: Not Supported 00:44:45.481 Per-NS Atomic Units: No 00:44:45.481 Maximum Single Source Range Length: 128 00:44:45.481 Maximum Copy Length: 128 00:44:45.481 Maximum Source Range Count: 128 00:44:45.481 NGUID/EUI64 Never Reused: No 00:44:45.481 Namespace Write Protected: No 00:44:45.481 Number of LBA Formats: 8 00:44:45.481 Current LBA Format: 
LBA Format #04 00:44:45.481 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:45.481 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:45.481 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:45.481 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:45.481 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:45.481 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:45.481 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:45.481 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:45.481 00:44:45.481 NVM Specific Namespace Data 00:44:45.481 =========================== 00:44:45.481 Logical Block Storage Tag Mask: 0 00:44:45.481 Protection Information Capabilities: 00:44:45.481 16b Guard Protection Information Storage Tag Support: No 00:44:45.481 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:45.481 Storage Tag Check Read Support: No 00:44:45.481 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.481 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.481 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.481 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.481 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.481 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.481 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.481 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.481 ===================================================== 00:44:45.481 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:44:45.481 ===================================================== 00:44:45.481 Controller Capabilities/Features 00:44:45.481 ================================ 00:44:45.481 Vendor ID: 1b36 00:44:45.481 Subsystem Vendor ID: 1af4 00:44:45.481 Serial Number: 12343 00:44:45.481 Model Number: QEMU NVMe Ctrl 00:44:45.481 Firmware Version: 8.0.0 00:44:45.481 Recommended Arb Burst: 6 00:44:45.481 IEEE OUI Identifier: 00 54 52 00:44:45.481 Multi-path I/O 00:44:45.481 May have multiple subsystem ports: No 00:44:45.481 May have multiple controllers: Yes 00:44:45.481 Associated with SR-IOV VF: No 00:44:45.481 Max Data Transfer Size: 524288 00:44:45.481 Max Number of Namespaces: 256 00:44:45.481 Max Number of I/O Queues: 64 00:44:45.481 NVMe Specification Version (VS): 1.4 00:44:45.481 NVMe Specification Version (Identify): 1.4 00:44:45.481 Maximum Queue Entries: 2048 00:44:45.481 Contiguous Queues Required: Yes 00:44:45.481 Arbitration Mechanisms Supported 00:44:45.481 Weighted Round Robin: Not Supported 00:44:45.481 Vendor Specific: Not Supported 00:44:45.481 Reset Timeout: 7500 ms 00:44:45.481 Doorbell Stride: 4 bytes 00:44:45.481 NVM Subsystem Reset: Not Supported 00:44:45.481 Command Sets Supported 00:44:45.481 NVM Command Set: Supported 00:44:45.481 Boot Partition: Not Supported 00:44:45.481 Memory Page Size Minimum: 4096 bytes 00:44:45.481 Memory Page Size Maximum: 65536 bytes 00:44:45.481 Persistent Memory Region: Not Supported 00:44:45.481 Optional Asynchronous Events Supported 00:44:45.481 Namespace Attribute Notices: Supported 00:44:45.481 Firmware Activation Notices: Not Supported 00:44:45.481 ANA Change Notices: Not Supported 00:44:45.481 PLE Aggregate Log 
Change Notices: Not Supported 00:44:45.481 LBA Status Info Alert Notices: Not Supported 00:44:45.481 EGE Aggregate Log Change Notices: Not Supported 00:44:45.481 Normal NVM Subsystem Shutdown event: Not Supported 00:44:45.481 Zone Descriptor Change Notices: Not Supported 00:44:45.481 Discovery Log Change Notices: Not Supported 00:44:45.481 Controller Attributes 00:44:45.481 128-bit Host Identifier: Not Supported 00:44:45.481 Non-Operational Permissive Mode: Not Supported 00:44:45.481 NVM Sets: Not Supported 00:44:45.481 Read Recovery Levels: Not Supported 00:44:45.481 Endurance Groups: Supported 00:44:45.481 Predictable Latency Mode: Not Supported 00:44:45.481 Traffic Based Keep ALive: Not Supported 00:44:45.481 Namespace Granularity: Not Supported 00:44:45.481 SQ Associations: Not Supported 00:44:45.481 UUID List: Not Supported 00:44:45.481 Multi-Domain Subsystem: Not Supported 00:44:45.481 Fixed Capacity Management: Not Supported 00:44:45.481 Variable Capacity Management: Not Supported 00:44:45.481 Delete Endurance Group: Not Supported 00:44:45.481 Delete NVM Set: Not Supported 00:44:45.481 Extended LBA Formats Supported: Supported 00:44:45.481 Flexible Data Placement Supported: Supported 00:44:45.481 00:44:45.481 Controller Memory Buffer Support 00:44:45.481 ================================ 00:44:45.481 Supported: No 00:44:45.481 00:44:45.481 Persistent Memory Region Support 00:44:45.481 ================================ 00:44:45.481 Supported: No 00:44:45.481 00:44:45.481 Admin Command Set Attributes 00:44:45.481 ============================ 00:44:45.481 Security Send/Receive: Not Supported 00:44:45.481 Format NVM: Supported 00:44:45.481 Firmware Activate/Download: Not Supported 00:44:45.481 Namespace Management: Supported 00:44:45.481 Device Self-Test: Not Supported 00:44:45.481 Directives: Supported 00:44:45.481 NVMe-MI: Not Supported 00:44:45.481 Virtualization Management: Not Supported 00:44:45.481 Doorbell Buffer Config: Supported 00:44:45.481 Get LBA Status Capability: Not Supported 00:44:45.481 Command & Feature Lockdown Capability: Not Supported 00:44:45.481 Abort Command Limit: 4 00:44:45.481 Async Event Request Limit: 4 00:44:45.481 Number of Firmware Slots: N/A 00:44:45.481 Firmware Slot 1 Read-Only: N/A 00:44:45.481 Firmware Activation Without Reset: N/A 00:44:45.481 Multiple Update Detection Support: N/A 00:44:45.481 Firmware Update Granularity: No Information Provided 00:44:45.481 Per-Namespace SMART Log: Yes 00:44:45.481 Asymmetric Namespace Access Log Page: Not Supported 00:44:45.481 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:44:45.481 Command Effects Log Page: Supported 00:44:45.481 Get Log Page Extended Data: Supported 00:44:45.481 Telemetry Log Pages: Not Supported 00:44:45.481 Persistent Event Log Pages: Not Supported 00:44:45.481 Supported Log Pages Log Page: May Support 00:44:45.481 Commands Supported & Effects Log Page: Not Supported 00:44:45.481 Feature Identifiers & Effects Log Page:May Support 00:44:45.481 NVMe-MI Commands & Effects Log Page: May Support 00:44:45.481 Data Area 4 for Telemetry Log: Not Supported 00:44:45.481 Error Log Page Entries Supported: 1 00:44:45.481 Keep Alive: Not Supported 00:44:45.481 00:44:45.481 NVM Command Set Attributes 00:44:45.481 ========================== 00:44:45.481 Submission Queue Entry Size 00:44:45.481 Max: 64 00:44:45.481 Min: 64 00:44:45.481 Completion Queue Entry Size 00:44:45.481 Max: 16 00:44:45.481 Min: 16 00:44:45.481 Number of Namespaces: 256 00:44:45.481 Compare Command: Supported 00:44:45.481 Write 
Uncorrectable Command: Not Supported 00:44:45.481 Dataset Management Command: Supported 00:44:45.481 Write Zeroes Command: Supported 00:44:45.481 Set Features Save Field: Supported 00:44:45.481 Reservations: Not Supported 00:44:45.481 Timestamp: Supported 00:44:45.481 Copy: Supported 00:44:45.481 Volatile Write Cache: Present 00:44:45.481 Atomic Write Unit (Normal): 1 00:44:45.481 Atomic Write Unit (PFail): 1 00:44:45.481 Atomic Compare & Write Unit: 1 00:44:45.481 Fused Compare & Write: Not Supported 00:44:45.481 Scatter-Gather List 00:44:45.481 SGL Command Set: Supported 00:44:45.481 SGL Keyed: Not Supported 00:44:45.481 SGL Bit Bucket Descriptor: Not Supported 00:44:45.481 SGL Metadata Pointer: Not Supported 00:44:45.481 Oversized SGL: Not Supported 00:44:45.481 SGL Metadata Address: Not Supported 00:44:45.481 SGL Offset: Not Supported 00:44:45.481 Transport SGL Data Block: Not Supported 00:44:45.481 Replay Protected Memory Block: Not Supported 00:44:45.481 00:44:45.481 Firmware Slot Information 00:44:45.482 ========================= 00:44:45.482 Active slot: 1 00:44:45.482 Slot 1 Firmware Revision: 1.0 00:44:45.482 00:44:45.482 00:44:45.482 Commands Supported and Effects 00:44:45.482 ============================== 00:44:45.482 Admin Commands 00:44:45.482 -------------- 00:44:45.482 Delete I/O Submission Queue (00h): Supported 00:44:45.482 Create I/O Submission Queue (01h): Supported 00:44:45.482 Get Log Page (02h): Supported 00:44:45.482 Delete I/O Completion Queue (04h): Supported 00:44:45.482 Create I/O Completion Queue (05h): Supported 00:44:45.482 Identify (06h): Supported 00:44:45.482 Abort (08h): Supported 00:44:45.482 Set Features (09h): Supported 00:44:45.482 Get Features (0Ah): Supported 00:44:45.482 Asynchronous Event Request (0Ch): Supported 00:44:45.482 Namespace Attachment (15h): Supported NS-Inventory-Change 00:44:45.482 Directive Send (19h): Supported 00:44:45.482 Directive Receive (1Ah): Supported 00:44:45.482 Virtualization Management (1Ch): Supported 00:44:45.482 Doorbell Buffer Config (7Ch): Supported 00:44:45.482 Format NVM (80h): Supported LBA-Change 00:44:45.482 I/O Commands 00:44:45.482 ------------ 00:44:45.482 Flush (00h): Supported LBA-Change 00:44:45.482 Write (01h): Supported LBA-Change 00:44:45.482 Read (02h): Supported 00:44:45.482 Compare (05h): Supported 00:44:45.482 Write Zeroes (08h): Supported LBA-Change 00:44:45.482 Dataset Management (09h): Supported LBA-Change 00:44:45.482 Unknown (0Ch): Supported 00:44:45.482 Unknown (12h): Supported 00:44:45.482 Copy (19h): Supported LBA-Change 00:44:45.482 Unknown (1Dh): Supported LBA-Change 00:44:45.482 00:44:45.482 Error Log 00:44:45.482 ========= 00:44:45.482 00:44:45.482 Arbitration 00:44:45.482 =========== 00:44:45.482 Arbitration Burst: no limit 00:44:45.482 00:44:45.482 Power Management 00:44:45.482 ================ 00:44:45.482 Number of Power States: 1 00:44:45.482 Current Power State: Power State #0 00:44:45.482 Power State #0: 00:44:45.482 Max Power: 25.00 W 00:44:45.482 Non-Operational State: Operational 00:44:45.482 Entry Latency: 16 microseconds 00:44:45.482 Exit Latency: 4 microseconds 00:44:45.482 Relative Read Throughput: 0 00:44:45.482 Relative Read Latency: 0 00:44:45.482 Relative Write Throughput: 0 00:44:45.482 Relative Write Latency: 0 00:44:45.482 Idle Power: Not Reported 00:44:45.482 Active Power: Not Reported 00:44:45.482 Non-Operational Permissive Mode: Not Supported 00:44:45.482 00:44:45.482 Health Information 00:44:45.482 ================== 00:44:45.482 Critical Warnings: 00:44:45.482 
Available Spare Space: OK 00:44:45.482 Temperature: OK 00:44:45.482 Device Reliability: OK 00:44:45.482 Read Only: No 00:44:45.482 Volatile Memory Backup: OK 00:44:45.482 Current Temperature: 323 Kelvin (50 Celsius) 00:44:45.482 Temperature Threshold: 343 Kelvin (70 Celsius) 00:44:45.482 Available Spare: 0% 00:44:45.482 Available Spare Threshold: 0% 00:44:45.482 Life Percentage Used: 0% 00:44:45.482 Data Units Read: 767 00:44:45.482 Data Units Written: 696 00:44:45.482 Host Read Commands: 32586 00:44:45.482 Host Write Commands: 32010 00:44:45.482 Controller Busy Time: 0 minutes 00:44:45.482 Power Cycles: 0 00:44:45.482 Power On Hours: 0 hours 00:44:45.482 Unsafe Shutdowns: 0 00:44:45.482 Unrecoverable Media Errors: 0 00:44:45.482 Lifetime Error Log Entries: 0 00:44:45.482 Warning Temperature Time: 0 minutes 00:44:45.482 Critical Temperature Time: 0 minutes 00:44:45.482 00:44:45.482 Number of Queues 00:44:45.482 ================ 00:44:45.482 Number of I/O Submission Queues: 64 00:44:45.482 Number of I/O Completion Queues: 64 00:44:45.482 00:44:45.482 ZNS Specific Controller Data 00:44:45.482 ============================ 00:44:45.482 Zone Append Size Limit: 0 00:44:45.482 00:44:45.482 00:44:45.482 Active Namespaces 00:44:45.482 ================= 00:44:45.482 Namespace ID:1 00:44:45.482 Error Recovery Timeout: Unlimited 00:44:45.482 Command Set Identifier: NVM (00h) 00:44:45.482 Deallocate: Supported 00:44:45.482 Deallocated/Unwritten Error: Supported 00:44:45.482 Deallocated Read Value: All 0x00 00:44:45.482 Deallocate in Write Zeroes: Not Supported 00:44:45.482 Deallocated Guard Field: 0xFFFF 00:44:45.482 Flush: Supported 00:44:45.482 Reservation: Not Supported 00:44:45.482 Namespace Sharing Capabilities: Multiple Controllers 00:44:45.482 Size (in LBAs): 262144 (1GiB) 00:44:45.482 Capacity (in LBAs): 262144 (1GiB) 00:44:45.482 Utilization (in LBAs): 262144 (1GiB) 00:44:45.482 Thin Provisioning: Not Supported 00:44:45.482 Per-NS Atomic Units: No 00:44:45.482 Maximum Single Source Range Length: 128 00:44:45.482 Maximum Copy Length: 128 00:44:45.482 Maximum Source Range Count: 128 00:44:45.482 NGUID/EUI64 Never Reused: No 00:44:45.482 Namespace Write Protected: No 00:44:45.482 Endurance group ID: 1 00:44:45.482 Number of LBA Formats: 8 00:44:45.482 Current LBA Format: LBA Format #04 00:44:45.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:45.482 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:45.482 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:45.482 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:45.482 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:45.482 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:45.482 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:45.482 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:45.482 00:44:45.482 Get Feature FDP: 00:44:45.482 ================ 00:44:45.482 Enabled: Yes 00:44:45.482 FDP configuration index: 0 00:44:45.482 00:44:45.482 FDP configurations log page 00:44:45.482 =========================== 00:44:45.482 Number of FDP configurations: 1 00:44:45.482 Version: 0 00:44:45.482 Size: 112 00:44:45.482 FDP Configuration Descriptor: 0 00:44:45.482 Descriptor Size: 96 00:44:45.482 Reclaim Group Identifier format: 2 00:44:45.482 FDP Volatile Write Cache: Not Present 00:44:45.482 FDP Configuration: Valid 00:44:45.482 Vendor Specific Size: 0 00:44:45.482 Number of Reclaim Groups: 2 00:44:45.482 Number of Reclaim Unit Handles: 8 00:44:45.482 Max Placement Identifiers: 128 00:44:45.482 Number of
Namespaces Supported: 256 00:44:45.482 Reclaim Unit Nominal Size: 6000000 bytes 00:44:45.482 Estimated Reclaim Unit Time Limit: Not Reported 00:44:45.482 RUH Desc #000: RUH Type: Initially Isolated 00:44:45.482 RUH Desc #001: RUH Type: Initially Isolated 00:44:45.482 RUH Desc #002: RUH Type: Initially Isolated 00:44:45.482 RUH Desc #003: RUH Type: Initially Isolated 00:44:45.482 RUH Desc #004: RUH Type: Initially Isolated 00:44:45.482 RUH Desc #005: RUH Type: Initially Isolated 00:44:45.482 RUH Desc #006: RUH Type: Initially Isolated 00:44:45.482 RUH Desc #007: RUH Type: Initially Isolated 00:44:45.482 00:44:45.482 FDP reclaim unit handle usage log page 00:44:45.482 ====================================== 00:44:45.482 Number of Reclaim Unit Handles: 8 00:44:45.482 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:44:45.482 RUH Usage Desc #001: RUH Attributes: Unused 00:44:45.482 RUH Usage Desc #002: RUH Attributes: Unused 00:44:45.482 RUH Usage Desc #003: RUH Attributes: Unused 00:44:45.482 RUH Usage Desc #004: RUH Attributes: Unused 00:44:45.482 RUH Usage Desc #005: RUH Attributes: Unused 00:44:45.482 RUH Usage Desc #006: RUH Attributes: Unused 00:44:45.482 RUH Usage Desc #007: RUH Attributes: Unused 00:44:45.482 00:44:45.482 FDP statistics log page 00:44:45.482 ======================= 00:44:45.482 Host bytes with metadata written: 435658752 00:44:45.482 [2024-12-09 09:55:52.479236] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64590 terminated unexpected 00:44:45.482 Media bytes with metadata written: 435724288 00:44:45.482 Media bytes erased: 0 00:44:45.482 00:44:45.482 FDP events log page 00:44:45.482 =================== 00:44:45.482 Number of FDP events: 0 00:44:45.482 00:44:45.482 NVM Specific Namespace Data 00:44:45.482 =========================== 00:44:45.482 Logical Block Storage Tag Mask: 0 00:44:45.482 Protection Information Capabilities: 00:44:45.482 16b Guard Protection Information Storage Tag Support: No 00:44:45.482 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:45.482 Storage Tag Check Read Support: No 00:44:45.482 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.482 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.482 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.482 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.482 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.482 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.482 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.482 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.482 ===================================================== 00:44:45.482 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:44:45.482 ===================================================== 00:44:45.482 Controller Capabilities/Features 00:44:45.482 ================================ 00:44:45.482 Vendor ID: 1b36 00:44:45.482 Subsystem Vendor ID: 1af4 00:44:45.483 Serial Number: 12342 00:44:45.483 Model Number: QEMU NVMe Ctrl 00:44:45.483 Firmware Version: 8.0.0 00:44:45.483 Recommended Arb Burst: 6 00:44:45.483 IEEE OUI Identifier: 00 54 52 00:44:45.483 Multi-path I/O
00:44:45.483 May have multiple subsystem ports: No 00:44:45.483 May have multiple controllers: No 00:44:45.483 Associated with SR-IOV VF: No 00:44:45.483 Max Data Transfer Size: 524288 00:44:45.483 Max Number of Namespaces: 256 00:44:45.483 Max Number of I/O Queues: 64 00:44:45.483 NVMe Specification Version (VS): 1.4 00:44:45.483 NVMe Specification Version (Identify): 1.4 00:44:45.483 Maximum Queue Entries: 2048 00:44:45.483 Contiguous Queues Required: Yes 00:44:45.483 Arbitration Mechanisms Supported 00:44:45.483 Weighted Round Robin: Not Supported 00:44:45.483 Vendor Specific: Not Supported 00:44:45.483 Reset Timeout: 7500 ms 00:44:45.483 Doorbell Stride: 4 bytes 00:44:45.483 NVM Subsystem Reset: Not Supported 00:44:45.483 Command Sets Supported 00:44:45.483 NVM Command Set: Supported 00:44:45.483 Boot Partition: Not Supported 00:44:45.483 Memory Page Size Minimum: 4096 bytes 00:44:45.483 Memory Page Size Maximum: 65536 bytes 00:44:45.483 Persistent Memory Region: Not Supported 00:44:45.483 Optional Asynchronous Events Supported 00:44:45.483 Namespace Attribute Notices: Supported 00:44:45.483 Firmware Activation Notices: Not Supported 00:44:45.483 ANA Change Notices: Not Supported 00:44:45.483 PLE Aggregate Log Change Notices: Not Supported 00:44:45.483 LBA Status Info Alert Notices: Not Supported 00:44:45.483 EGE Aggregate Log Change Notices: Not Supported 00:44:45.483 Normal NVM Subsystem Shutdown event: Not Supported 00:44:45.483 Zone Descriptor Change Notices: Not Supported 00:44:45.483 Discovery Log Change Notices: Not Supported 00:44:45.483 Controller Attributes 00:44:45.483 128-bit Host Identifier: Not Supported 00:44:45.483 Non-Operational Permissive Mode: Not Supported 00:44:45.483 NVM Sets: Not Supported 00:44:45.483 Read Recovery Levels: Not Supported 00:44:45.483 Endurance Groups: Not Supported 00:44:45.483 Predictable Latency Mode: Not Supported 00:44:45.483 Traffic Based Keep Alive: Not Supported 00:44:45.483 Namespace Granularity: Not Supported 00:44:45.483 SQ Associations: Not Supported 00:44:45.483 UUID List: Not Supported 00:44:45.483 Multi-Domain Subsystem: Not Supported 00:44:45.483 Fixed Capacity Management: Not Supported 00:44:45.483 Variable Capacity Management: Not Supported 00:44:45.483 Delete Endurance Group: Not Supported 00:44:45.483 Delete NVM Set: Not Supported 00:44:45.483 Extended LBA Formats Supported: Supported 00:44:45.483 Flexible Data Placement Supported: Not Supported 00:44:45.483 00:44:45.483 Controller Memory Buffer Support 00:44:45.483 ================================ 00:44:45.483 Supported: No 00:44:45.483 00:44:45.483 Persistent Memory Region Support 00:44:45.483 ================================ 00:44:45.483 Supported: No 00:44:45.483 00:44:45.483 Admin Command Set Attributes 00:44:45.483 ============================ 00:44:45.483 Security Send/Receive: Not Supported 00:44:45.483 Format NVM: Supported 00:44:45.483 Firmware Activate/Download: Not Supported 00:44:45.483 Namespace Management: Supported 00:44:45.483 Device Self-Test: Not Supported 00:44:45.483 Directives: Supported 00:44:45.483 NVMe-MI: Not Supported 00:44:45.483 Virtualization Management: Not Supported 00:44:45.483 Doorbell Buffer Config: Supported 00:44:45.483 Get LBA Status Capability: Not Supported 00:44:45.483 Command & Feature Lockdown Capability: Not Supported 00:44:45.483 Abort Command Limit: 4 00:44:45.483 Async Event Request Limit: 4 00:44:45.483 Number of Firmware Slots: N/A 00:44:45.483 Firmware Slot 1 Read-Only: N/A 00:44:45.483 Firmware Activation Without Reset: N/A
00:44:45.483 Multiple Update Detection Support: N/A 00:44:45.483 Firmware Update Granularity: No Information Provided 00:44:45.483 Per-Namespace SMART Log: Yes 00:44:45.483 Asymmetric Namespace Access Log Page: Not Supported 00:44:45.483 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:44:45.483 Command Effects Log Page: Supported 00:44:45.483 Get Log Page Extended Data: Supported 00:44:45.483 Telemetry Log Pages: Not Supported 00:44:45.483 Persistent Event Log Pages: Not Supported 00:44:45.483 Supported Log Pages Log Page: May Support 00:44:45.483 Commands Supported & Effects Log Page: Not Supported 00:44:45.483 Feature Identifiers & Effects Log Page: May Support 00:44:45.483 NVMe-MI Commands & Effects Log Page: May Support 00:44:45.483 Data Area 4 for Telemetry Log: Not Supported 00:44:45.483 Error Log Page Entries Supported: 1 00:44:45.483 Keep Alive: Not Supported 00:44:45.483 00:44:45.483 NVM Command Set Attributes 00:44:45.483 ========================== 00:44:45.483 Submission Queue Entry Size 00:44:45.483 Max: 64 00:44:45.483 Min: 64 00:44:45.483 Completion Queue Entry Size 00:44:45.483 Max: 16 00:44:45.483 Min: 16 00:44:45.483 Number of Namespaces: 256 00:44:45.483 Compare Command: Supported 00:44:45.483 Write Uncorrectable Command: Not Supported 00:44:45.483 Dataset Management Command: Supported 00:44:45.483 Write Zeroes Command: Supported 00:44:45.483 Set Features Save Field: Supported 00:44:45.483 Reservations: Not Supported 00:44:45.483 Timestamp: Supported 00:44:45.483 Copy: Supported 00:44:45.483 Volatile Write Cache: Present 00:44:45.483 Atomic Write Unit (Normal): 1 00:44:45.483 Atomic Write Unit (PFail): 1 00:44:45.483 Atomic Compare & Write Unit: 1 00:44:45.483 Fused Compare & Write: Not Supported 00:44:45.483 Scatter-Gather List 00:44:45.483 SGL Command Set: Supported 00:44:45.483 SGL Keyed: Not Supported 00:44:45.483 SGL Bit Bucket Descriptor: Not Supported 00:44:45.483 SGL Metadata Pointer: Not Supported 00:44:45.483 Oversized SGL: Not Supported 00:44:45.483 SGL Metadata Address: Not Supported 00:44:45.483 SGL Offset: Not Supported 00:44:45.483 Transport SGL Data Block: Not Supported 00:44:45.483 Replay Protected Memory Block: Not Supported 00:44:45.483 00:44:45.483 Firmware Slot Information 00:44:45.483 ========================= 00:44:45.483 Active slot: 1 00:44:45.483 Slot 1 Firmware Revision: 1.0 00:44:45.483 00:44:45.483 00:44:45.483 Commands Supported and Effects 00:44:45.483 ============================== 00:44:45.483 Admin Commands 00:44:45.483 -------------- 00:44:45.483 Delete I/O Submission Queue (00h): Supported 00:44:45.483 Create I/O Submission Queue (01h): Supported 00:44:45.483 Get Log Page (02h): Supported 00:44:45.483 Delete I/O Completion Queue (04h): Supported 00:44:45.483 Create I/O Completion Queue (05h): Supported 00:44:45.483 Identify (06h): Supported 00:44:45.483 Abort (08h): Supported 00:44:45.483 Set Features (09h): Supported 00:44:45.483 Get Features (0Ah): Supported 00:44:45.483 Asynchronous Event Request (0Ch): Supported 00:44:45.483 Namespace Attachment (15h): Supported NS-Inventory-Change 00:44:45.483 Directive Send (19h): Supported 00:44:45.483 Directive Receive (1Ah): Supported 00:44:45.483 Virtualization Management (1Ch): Supported 00:44:45.483 Doorbell Buffer Config (7Ch): Supported 00:44:45.483 Format NVM (80h): Supported LBA-Change 00:44:45.483 I/O Commands 00:44:45.483 ------------ 00:44:45.483 Flush (00h): Supported LBA-Change 00:44:45.483 Write (01h): Supported LBA-Change 00:44:45.483 Read (02h): Supported 00:44:45.483 Compare (05h):
Supported 00:44:45.483 Write Zeroes (08h): Supported LBA-Change 00:44:45.483 Dataset Management (09h): Supported LBA-Change 00:44:45.483 Unknown (0Ch): Supported 00:44:45.483 Unknown (12h): Supported 00:44:45.483 Copy (19h): Supported LBA-Change 00:44:45.483 Unknown (1Dh): Supported LBA-Change 00:44:45.484 00:44:45.484 Error Log 00:44:45.484 ========= 00:44:45.484 00:44:45.484 Arbitration 00:44:45.484 =========== 00:44:45.484 Arbitration Burst: no limit 00:44:45.484 00:44:45.484 Power Management 00:44:45.484 ================ 00:44:45.484 Number of Power States: 1 00:44:45.484 Current Power State: Power State #0 00:44:45.484 Power State #0: 00:44:45.484 Max Power: 25.00 W 00:44:45.484 Non-Operational State: Operational 00:44:45.484 Entry Latency: 16 microseconds 00:44:45.484 Exit Latency: 4 microseconds 00:44:45.484 Relative Read Throughput: 0 00:44:45.484 Relative Read Latency: 0 00:44:45.484 Relative Write Throughput: 0 00:44:45.484 Relative Write Latency: 0 00:44:45.484 Idle Power: Not Reported 00:44:45.484 Active Power: Not Reported 00:44:45.484 Non-Operational Permissive Mode: Not Supported 00:44:45.484 00:44:45.484 Health Information 00:44:45.484 ================== 00:44:45.484 Critical Warnings: 00:44:45.484 Available Spare Space: OK 00:44:45.484 Temperature: OK 00:44:45.484 Device Reliability: OK 00:44:45.484 Read Only: No 00:44:45.484 Volatile Memory Backup: OK 00:44:45.484 Current Temperature: 323 Kelvin (50 Celsius) 00:44:45.484 Temperature Threshold: 343 Kelvin (70 Celsius) 00:44:45.484 Available Spare: 0% 00:44:45.484 Available Spare Threshold: 0% 00:44:45.484 Life Percentage Used: 0% 00:44:45.484 Data Units Read: 2070 00:44:45.484 Data Units Written: 1858 00:44:45.484 Host Read Commands: 95753 00:44:45.484 Host Write Commands: 94064 00:44:45.484 Controller Busy Time: 0 minutes 00:44:45.484 Power Cycles: 0 00:44:45.484 Power On Hours: 0 hours 00:44:45.484 Unsafe Shutdowns: 0 00:44:45.484 Unrecoverable Media Errors: 0 00:44:45.484 Lifetime Error Log Entries: 0 00:44:45.484 Warning Temperature Time: 0 minutes 00:44:45.484 Critical Temperature Time: 0 minutes 00:44:45.484 00:44:45.484 Number of Queues 00:44:45.484 ================ 00:44:45.484 Number of I/O Submission Queues: 64 00:44:45.484 Number of I/O Completion Queues: 64 00:44:45.484 00:44:45.484 ZNS Specific Controller Data 00:44:45.484 ============================ 00:44:45.484 Zone Append Size Limit: 0 00:44:45.484 00:44:45.484 00:44:45.484 Active Namespaces 00:44:45.484 ================= 00:44:45.484 Namespace ID:1 00:44:45.484 Error Recovery Timeout: Unlimited 00:44:45.484 Command Set Identifier: NVM (00h) 00:44:45.484 Deallocate: Supported 00:44:45.484 Deallocated/Unwritten Error: Supported 00:44:45.484 Deallocated Read Value: All 0x00 00:44:45.484 Deallocate in Write Zeroes: Not Supported 00:44:45.484 Deallocated Guard Field: 0xFFFF 00:44:45.484 Flush: Supported 00:44:45.484 Reservation: Not Supported 00:44:45.484 Namespace Sharing Capabilities: Private 00:44:45.484 Size (in LBAs): 1048576 (4GiB) 00:44:45.484 Capacity (in LBAs): 1048576 (4GiB) 00:44:45.484 Utilization (in LBAs): 1048576 (4GiB) 00:44:45.484 Thin Provisioning: Not Supported 00:44:45.484 Per-NS Atomic Units: No 00:44:45.484 Maximum Single Source Range Length: 128 00:44:45.484 Maximum Copy Length: 128 00:44:45.484 Maximum Source Range Count: 128 00:44:45.484 NGUID/EUI64 Never Reused: No 00:44:45.484 Namespace Write Protected: No 00:44:45.484 Number of LBA Formats: 8 00:44:45.484 Current LBA Format: LBA Format #04 00:44:45.484 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:44:45.484 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:45.484 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:45.484 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:45.484 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:45.484 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:45.484 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:45.484 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:45.484 00:44:45.484 NVM Specific Namespace Data 00:44:45.484 =========================== 00:44:45.484 Logical Block Storage Tag Mask: 0 00:44:45.484 Protection Information Capabilities: 00:44:45.484 16b Guard Protection Information Storage Tag Support: No 00:44:45.484 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:45.484 Storage Tag Check Read Support: No 00:44:45.484 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Namespace ID:2 00:44:45.484 Error Recovery Timeout: Unlimited 00:44:45.484 Command Set Identifier: NVM (00h) 00:44:45.484 Deallocate: Supported 00:44:45.484 Deallocated/Unwritten Error: Supported 00:44:45.484 Deallocated Read Value: All 0x00 00:44:45.484 Deallocate in Write Zeroes: Not Supported 00:44:45.484 Deallocated Guard Field: 0xFFFF 00:44:45.484 Flush: Supported 00:44:45.484 Reservation: Not Supported 00:44:45.484 Namespace Sharing Capabilities: Private 00:44:45.484 Size (in LBAs): 1048576 (4GiB) 00:44:45.484 Capacity (in LBAs): 1048576 (4GiB) 00:44:45.484 Utilization (in LBAs): 1048576 (4GiB) 00:44:45.484 Thin Provisioning: Not Supported 00:44:45.484 Per-NS Atomic Units: No 00:44:45.484 Maximum Single Source Range Length: 128 00:44:45.484 Maximum Copy Length: 128 00:44:45.484 Maximum Source Range Count: 128 00:44:45.484 NGUID/EUI64 Never Reused: No 00:44:45.484 Namespace Write Protected: No 00:44:45.484 Number of LBA Formats: 8 00:44:45.484 Current LBA Format: LBA Format #04 00:44:45.484 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:45.484 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:45.484 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:45.484 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:45.484 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:45.484 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:45.484 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:45.484 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:45.484 00:44:45.484 NVM Specific Namespace Data 00:44:45.484 =========================== 00:44:45.484 Logical Block Storage Tag Mask: 0 00:44:45.484 Protection Information Capabilities: 00:44:45.484 16b Guard Protection Information Storage Tag Support: No 00:44:45.484 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:44:45.484 Storage Tag Check Read Support: No 00:44:45.484 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.484 Namespace ID:3 00:44:45.484 Error Recovery Timeout: Unlimited 00:44:45.484 Command Set Identifier: NVM (00h) 00:44:45.484 Deallocate: Supported 00:44:45.484 Deallocated/Unwritten Error: Supported 00:44:45.484 Deallocated Read Value: All 0x00 00:44:45.484 Deallocate in Write Zeroes: Not Supported 00:44:45.484 Deallocated Guard Field: 0xFFFF 00:44:45.484 Flush: Supported 00:44:45.484 Reservation: Not Supported 00:44:45.484 Namespace Sharing Capabilities: Private 00:44:45.484 Size (in LBAs): 1048576 (4GiB) 00:44:45.743 Capacity (in LBAs): 1048576 (4GiB) 00:44:45.743 Utilization (in LBAs): 1048576 (4GiB) 00:44:45.743 Thin Provisioning: Not Supported 00:44:45.743 Per-NS Atomic Units: No 00:44:45.743 Maximum Single Source Range Length: 128 00:44:45.743 Maximum Copy Length: 128 00:44:45.743 Maximum Source Range Count: 128 00:44:45.743 NGUID/EUI64 Never Reused: No 00:44:45.743 Namespace Write Protected: No 00:44:45.743 Number of LBA Formats: 8 00:44:45.743 Current LBA Format: LBA Format #04 00:44:45.743 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:45.743 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:45.743 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:45.743 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:45.743 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:45.743 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:45.743 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:45.743 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:45.743 00:44:45.743 NVM Specific Namespace Data 00:44:45.743 =========================== 00:44:45.743 Logical Block Storage Tag Mask: 0 00:44:45.743 Protection Information Capabilities: 00:44:45.743 16b Guard Protection Information Storage Tag Support: No 00:44:45.743 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:45.743 Storage Tag Check Read Support: No 00:44:45.743 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.743 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.743 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.743 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.743 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.743 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.743 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.743 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:45.743 09:55:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:44:45.743 09:55:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:44:46.003 ===================================================== 00:44:46.003 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:46.003 ===================================================== 00:44:46.003 Controller Capabilities/Features 00:44:46.003 ================================ 00:44:46.003 Vendor ID: 1b36 00:44:46.003 Subsystem Vendor ID: 1af4 00:44:46.003 Serial Number: 12340 00:44:46.003 Model Number: QEMU NVMe Ctrl 00:44:46.003 Firmware Version: 8.0.0 00:44:46.003 Recommended Arb Burst: 6 00:44:46.003 IEEE OUI Identifier: 00 54 52 00:44:46.003 Multi-path I/O 00:44:46.003 May have multiple subsystem ports: No 00:44:46.003 May have multiple controllers: No 00:44:46.003 Associated with SR-IOV VF: No 00:44:46.003 Max Data Transfer Size: 524288 00:44:46.003 Max Number of Namespaces: 256 00:44:46.003 Max Number of I/O Queues: 64 00:44:46.003 NVMe Specification Version (VS): 1.4 00:44:46.003 NVMe Specification Version (Identify): 1.4 00:44:46.003 Maximum Queue Entries: 2048 00:44:46.003 Contiguous Queues Required: Yes 00:44:46.003 Arbitration Mechanisms Supported 00:44:46.003 Weighted Round Robin: Not Supported 00:44:46.003 Vendor Specific: Not Supported 00:44:46.003 Reset Timeout: 7500 ms 00:44:46.003 Doorbell Stride: 4 bytes 00:44:46.003 NVM Subsystem Reset: Not Supported 00:44:46.003 Command Sets Supported 00:44:46.003 NVM Command Set: Supported 00:44:46.003 Boot Partition: Not Supported 00:44:46.003 Memory Page Size Minimum: 4096 bytes 00:44:46.003 Memory Page Size Maximum: 65536 bytes 00:44:46.003 Persistent Memory Region: Not Supported 00:44:46.003 Optional Asynchronous Events Supported 00:44:46.003 Namespace Attribute Notices: Supported 00:44:46.003 Firmware Activation Notices: Not Supported 00:44:46.003 ANA Change Notices: Not Supported 00:44:46.003 PLE Aggregate Log Change Notices: Not Supported 00:44:46.003 LBA Status Info Alert Notices: Not Supported 00:44:46.003 EGE Aggregate Log Change Notices: Not Supported 00:44:46.003 Normal NVM Subsystem Shutdown event: Not Supported 00:44:46.003 Zone Descriptor Change Notices: Not Supported 00:44:46.003 Discovery Log Change Notices: Not Supported 00:44:46.003 Controller Attributes 00:44:46.003 128-bit Host Identifier: Not Supported 00:44:46.003 Non-Operational Permissive Mode: Not Supported 00:44:46.003 NVM Sets: Not Supported 00:44:46.003 Read Recovery Levels: Not Supported 00:44:46.003 Endurance Groups: Not Supported 00:44:46.003 Predictable Latency Mode: Not Supported 00:44:46.003 Traffic Based Keep Alive: Not Supported 00:44:46.003 Namespace Granularity: Not Supported 00:44:46.003 SQ Associations: Not Supported 00:44:46.003 UUID List: Not Supported 00:44:46.003 Multi-Domain Subsystem: Not Supported 00:44:46.003 Fixed Capacity Management: Not Supported 00:44:46.003 Variable Capacity Management: Not Supported 00:44:46.003 Delete Endurance Group: Not Supported 00:44:46.003 Delete NVM Set: Not Supported 00:44:46.003 Extended LBA Formats Supported: Supported 00:44:46.003 Flexible Data Placement Supported: Not Supported 00:44:46.003 00:44:46.003 Controller Memory Buffer Support 00:44:46.003 ================================ 00:44:46.003 Supported: No 00:44:46.003 00:44:46.003 Persistent Memory Region Support 00:44:46.003
================================ 00:44:46.003 Supported: No 00:44:46.003 00:44:46.003 Admin Command Set Attributes 00:44:46.003 ============================ 00:44:46.003 Security Send/Receive: Not Supported 00:44:46.003 Format NVM: Supported 00:44:46.003 Firmware Activate/Download: Not Supported 00:44:46.003 Namespace Management: Supported 00:44:46.003 Device Self-Test: Not Supported 00:44:46.003 Directives: Supported 00:44:46.003 NVMe-MI: Not Supported 00:44:46.003 Virtualization Management: Not Supported 00:44:46.003 Doorbell Buffer Config: Supported 00:44:46.003 Get LBA Status Capability: Not Supported 00:44:46.003 Command & Feature Lockdown Capability: Not Supported 00:44:46.003 Abort Command Limit: 4 00:44:46.003 Async Event Request Limit: 4 00:44:46.003 Number of Firmware Slots: N/A 00:44:46.003 Firmware Slot 1 Read-Only: N/A 00:44:46.003 Firmware Activation Without Reset: N/A 00:44:46.003 Multiple Update Detection Support: N/A 00:44:46.003 Firmware Update Granularity: No Information Provided 00:44:46.003 Per-Namespace SMART Log: Yes 00:44:46.003 Asymmetric Namespace Access Log Page: Not Supported 00:44:46.003 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:44:46.003 Command Effects Log Page: Supported 00:44:46.003 Get Log Page Extended Data: Supported 00:44:46.003 Telemetry Log Pages: Not Supported 00:44:46.003 Persistent Event Log Pages: Not Supported 00:44:46.003 Supported Log Pages Log Page: May Support 00:44:46.003 Commands Supported & Effects Log Page: Not Supported 00:44:46.003 Feature Identifiers & Effects Log Page: May Support 00:44:46.003 NVMe-MI Commands & Effects Log Page: May Support 00:44:46.003 Data Area 4 for Telemetry Log: Not Supported 00:44:46.003 Error Log Page Entries Supported: 1 00:44:46.003 Keep Alive: Not Supported 00:44:46.003 00:44:46.003 NVM Command Set Attributes 00:44:46.003 ========================== 00:44:46.003 Submission Queue Entry Size 00:44:46.003 Max: 64 00:44:46.003 Min: 64 00:44:46.003 Completion Queue Entry Size 00:44:46.003 Max: 16 00:44:46.003 Min: 16 00:44:46.003 Number of Namespaces: 256 00:44:46.003 Compare Command: Supported 00:44:46.003 Write Uncorrectable Command: Not Supported 00:44:46.003 Dataset Management Command: Supported 00:44:46.003 Write Zeroes Command: Supported 00:44:46.003 Set Features Save Field: Supported 00:44:46.003 Reservations: Not Supported 00:44:46.003 Timestamp: Supported 00:44:46.003 Copy: Supported 00:44:46.003 Volatile Write Cache: Present 00:44:46.003 Atomic Write Unit (Normal): 1 00:44:46.003 Atomic Write Unit (PFail): 1 00:44:46.003 Atomic Compare & Write Unit: 1 00:44:46.003 Fused Compare & Write: Not Supported 00:44:46.003 Scatter-Gather List 00:44:46.003 SGL Command Set: Supported 00:44:46.004 SGL Keyed: Not Supported 00:44:46.004 SGL Bit Bucket Descriptor: Not Supported 00:44:46.004 SGL Metadata Pointer: Not Supported 00:44:46.004 Oversized SGL: Not Supported 00:44:46.004 SGL Metadata Address: Not Supported 00:44:46.004 SGL Offset: Not Supported 00:44:46.004 Transport SGL Data Block: Not Supported 00:44:46.004 Replay Protected Memory Block: Not Supported 00:44:46.004 00:44:46.004 Firmware Slot Information 00:44:46.004 ========================= 00:44:46.004 Active slot: 1 00:44:46.004 Slot 1 Firmware Revision: 1.0 00:44:46.004 00:44:46.004 00:44:46.004 Commands Supported and Effects 00:44:46.004 ============================== 00:44:46.004 Admin Commands 00:44:46.004 -------------- 00:44:46.004 Delete I/O Submission Queue (00h): Supported 00:44:46.004 Create I/O Submission Queue (01h): Supported 00:44:46.004
Get Log Page (02h): Supported 00:44:46.004 Delete I/O Completion Queue (04h): Supported 00:44:46.004 Create I/O Completion Queue (05h): Supported 00:44:46.004 Identify (06h): Supported 00:44:46.004 Abort (08h): Supported 00:44:46.004 Set Features (09h): Supported 00:44:46.004 Get Features (0Ah): Supported 00:44:46.004 Asynchronous Event Request (0Ch): Supported 00:44:46.004 Namespace Attachment (15h): Supported NS-Inventory-Change 00:44:46.004 Directive Send (19h): Supported 00:44:46.004 Directive Receive (1Ah): Supported 00:44:46.004 Virtualization Management (1Ch): Supported 00:44:46.004 Doorbell Buffer Config (7Ch): Supported 00:44:46.004 Format NVM (80h): Supported LBA-Change 00:44:46.004 I/O Commands 00:44:46.004 ------------ 00:44:46.004 Flush (00h): Supported LBA-Change 00:44:46.004 Write (01h): Supported LBA-Change 00:44:46.004 Read (02h): Supported 00:44:46.004 Compare (05h): Supported 00:44:46.004 Write Zeroes (08h): Supported LBA-Change 00:44:46.004 Dataset Management (09h): Supported LBA-Change 00:44:46.004 Unknown (0Ch): Supported 00:44:46.004 Unknown (12h): Supported 00:44:46.004 Copy (19h): Supported LBA-Change 00:44:46.004 Unknown (1Dh): Supported LBA-Change 00:44:46.004 00:44:46.004 Error Log 00:44:46.004 ========= 00:44:46.004 00:44:46.004 Arbitration 00:44:46.004 =========== 00:44:46.004 Arbitration Burst: no limit 00:44:46.004 00:44:46.004 Power Management 00:44:46.004 ================ 00:44:46.004 Number of Power States: 1 00:44:46.004 Current Power State: Power State #0 00:44:46.004 Power State #0: 00:44:46.004 Max Power: 25.00 W 00:44:46.004 Non-Operational State: Operational 00:44:46.004 Entry Latency: 16 microseconds 00:44:46.004 Exit Latency: 4 microseconds 00:44:46.004 Relative Read Throughput: 0 00:44:46.004 Relative Read Latency: 0 00:44:46.004 Relative Write Throughput: 0 00:44:46.004 Relative Write Latency: 0 00:44:46.004 Idle Power: Not Reported 00:44:46.004 Active Power: Not Reported 00:44:46.004 Non-Operational Permissive Mode: Not Supported 00:44:46.004 00:44:46.004 Health Information 00:44:46.004 ================== 00:44:46.004 Critical Warnings: 00:44:46.004 Available Spare Space: OK 00:44:46.004 Temperature: OK 00:44:46.004 Device Reliability: OK 00:44:46.004 Read Only: No 00:44:46.004 Volatile Memory Backup: OK 00:44:46.004 Current Temperature: 323 Kelvin (50 Celsius) 00:44:46.004 Temperature Threshold: 343 Kelvin (70 Celsius) 00:44:46.004 Available Spare: 0% 00:44:46.004 Available Spare Threshold: 0% 00:44:46.004 Life Percentage Used: 0% 00:44:46.004 Data Units Read: 654 00:44:46.004 Data Units Written: 582 00:44:46.004 Host Read Commands: 31347 00:44:46.004 Host Write Commands: 31133 00:44:46.004 Controller Busy Time: 0 minutes 00:44:46.004 Power Cycles: 0 00:44:46.004 Power On Hours: 0 hours 00:44:46.004 Unsafe Shutdowns: 0 00:44:46.004 Unrecoverable Media Errors: 0 00:44:46.004 Lifetime Error Log Entries: 0 00:44:46.004 Warning Temperature Time: 0 minutes 00:44:46.004 Critical Temperature Time: 0 minutes 00:44:46.004 00:44:46.004 Number of Queues 00:44:46.004 ================ 00:44:46.004 Number of I/O Submission Queues: 64 00:44:46.004 Number of I/O Completion Queues: 64 00:44:46.004 00:44:46.004 ZNS Specific Controller Data 00:44:46.004 ============================ 00:44:46.004 Zone Append Size Limit: 0 00:44:46.004 00:44:46.004 00:44:46.004 Active Namespaces 00:44:46.004 ================= 00:44:46.004 Namespace ID:1 00:44:46.004 Error Recovery Timeout: Unlimited 00:44:46.004 Command Set Identifier: NVM (00h) 00:44:46.004 Deallocate: Supported 
00:44:46.004 Deallocated/Unwritten Error: Supported 00:44:46.004 Deallocated Read Value: All 0x00 00:44:46.004 Deallocate in Write Zeroes: Not Supported 00:44:46.004 Deallocated Guard Field: 0xFFFF 00:44:46.004 Flush: Supported 00:44:46.004 Reservation: Not Supported 00:44:46.004 Metadata Transferred as: Separate Metadata Buffer 00:44:46.004 Namespace Sharing Capabilities: Private 00:44:46.004 Size (in LBAs): 1548666 (5GiB) 00:44:46.004 Capacity (in LBAs): 1548666 (5GiB) 00:44:46.004 Utilization (in LBAs): 1548666 (5GiB) 00:44:46.004 Thin Provisioning: Not Supported 00:44:46.004 Per-NS Atomic Units: No 00:44:46.004 Maximum Single Source Range Length: 128 00:44:46.004 Maximum Copy Length: 128 00:44:46.004 Maximum Source Range Count: 128 00:44:46.004 NGUID/EUI64 Never Reused: No 00:44:46.004 Namespace Write Protected: No 00:44:46.004 Number of LBA Formats: 8 00:44:46.004 Current LBA Format: LBA Format #07 00:44:46.004 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:46.004 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:46.004 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:46.004 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:46.004 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:46.004 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:46.004 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:46.004 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:46.004 00:44:46.004 NVM Specific Namespace Data 00:44:46.004 =========================== 00:44:46.004 Logical Block Storage Tag Mask: 0 00:44:46.004 Protection Information Capabilities: 00:44:46.004 16b Guard Protection Information Storage Tag Support: No 00:44:46.004 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:46.004 Storage Tag Check Read Support: No 00:44:46.004 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.004 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.004 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.004 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.004 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.004 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.004 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.004 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.004 09:55:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:44:46.004 09:55:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:44:46.264 ===================================================== 00:44:46.264 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:44:46.264 ===================================================== 00:44:46.264 Controller Capabilities/Features 00:44:46.264 ================================ 00:44:46.264 Vendor ID: 1b36 00:44:46.264 Subsystem Vendor ID: 1af4 00:44:46.264 Serial Number: 12341 00:44:46.264 Model Number: QEMU NVMe Ctrl 00:44:46.264 Firmware Version: 8.0.0 00:44:46.264 Recommended Arb Burst: 6 00:44:46.264 IEEE OUI Identifier: 00 54 52 00:44:46.264 Multi-path I/O 00:44:46.264 May have multiple subsystem ports: No 00:44:46.264 May have multiple 
controllers: No 00:44:46.264 Associated with SR-IOV VF: No 00:44:46.264 Max Data Transfer Size: 524288 00:44:46.264 Max Number of Namespaces: 256 00:44:46.264 Max Number of I/O Queues: 64 00:44:46.264 NVMe Specification Version (VS): 1.4 00:44:46.264 NVMe Specification Version (Identify): 1.4 00:44:46.264 Maximum Queue Entries: 2048 00:44:46.264 Contiguous Queues Required: Yes 00:44:46.264 Arbitration Mechanisms Supported 00:44:46.264 Weighted Round Robin: Not Supported 00:44:46.264 Vendor Specific: Not Supported 00:44:46.264 Reset Timeout: 7500 ms 00:44:46.264 Doorbell Stride: 4 bytes 00:44:46.264 NVM Subsystem Reset: Not Supported 00:44:46.264 Command Sets Supported 00:44:46.264 NVM Command Set: Supported 00:44:46.264 Boot Partition: Not Supported 00:44:46.264 Memory Page Size Minimum: 4096 bytes 00:44:46.264 Memory Page Size Maximum: 65536 bytes 00:44:46.264 Persistent Memory Region: Not Supported 00:44:46.264 Optional Asynchronous Events Supported 00:44:46.264 Namespace Attribute Notices: Supported 00:44:46.264 Firmware Activation Notices: Not Supported 00:44:46.264 ANA Change Notices: Not Supported 00:44:46.264 PLE Aggregate Log Change Notices: Not Supported 00:44:46.264 LBA Status Info Alert Notices: Not Supported 00:44:46.264 EGE Aggregate Log Change Notices: Not Supported 00:44:46.264 Normal NVM Subsystem Shutdown event: Not Supported 00:44:46.264 Zone Descriptor Change Notices: Not Supported 00:44:46.264 Discovery Log Change Notices: Not Supported 00:44:46.264 Controller Attributes 00:44:46.264 128-bit Host Identifier: Not Supported 00:44:46.264 Non-Operational Permissive Mode: Not Supported 00:44:46.264 NVM Sets: Not Supported 00:44:46.264 Read Recovery Levels: Not Supported 00:44:46.264 Endurance Groups: Not Supported 00:44:46.264 Predictable Latency Mode: Not Supported 00:44:46.264 Traffic Based Keep Alive: Not Supported 00:44:46.264 Namespace Granularity: Not Supported 00:44:46.264 SQ Associations: Not Supported 00:44:46.264 UUID List: Not Supported 00:44:46.264 Multi-Domain Subsystem: Not Supported 00:44:46.264 Fixed Capacity Management: Not Supported 00:44:46.264 Variable Capacity Management: Not Supported 00:44:46.264 Delete Endurance Group: Not Supported 00:44:46.264 Delete NVM Set: Not Supported 00:44:46.264 Extended LBA Formats Supported: Supported 00:44:46.264 Flexible Data Placement Supported: Not Supported 00:44:46.264 00:44:46.264 Controller Memory Buffer Support 00:44:46.264 ================================ 00:44:46.264 Supported: No 00:44:46.264 00:44:46.264 Persistent Memory Region Support 00:44:46.264 ================================ 00:44:46.264 Supported: No 00:44:46.264 00:44:46.264 Admin Command Set Attributes 00:44:46.264 ============================ 00:44:46.264 Security Send/Receive: Not Supported 00:44:46.264 Format NVM: Supported 00:44:46.264 Firmware Activate/Download: Not Supported 00:44:46.264 Namespace Management: Supported 00:44:46.264 Device Self-Test: Not Supported 00:44:46.264 Directives: Supported 00:44:46.264 NVMe-MI: Not Supported 00:44:46.264 Virtualization Management: Not Supported 00:44:46.264 Doorbell Buffer Config: Supported 00:44:46.264 Get LBA Status Capability: Not Supported 00:44:46.264 Command & Feature Lockdown Capability: Not Supported 00:44:46.264 Abort Command Limit: 4 00:44:46.264 Async Event Request Limit: 4 00:44:46.264 Number of Firmware Slots: N/A 00:44:46.264 Firmware Slot 1 Read-Only: N/A 00:44:46.264 Firmware Activation Without Reset: N/A 00:44:46.264 Multiple Update Detection Support: N/A 00:44:46.264 Firmware Update
Granularity: No Information Provided 00:44:46.264 Per-Namespace SMART Log: Yes 00:44:46.264 Asymmetric Namespace Access Log Page: Not Supported 00:44:46.264 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:44:46.264 Command Effects Log Page: Supported 00:44:46.264 Get Log Page Extended Data: Supported 00:44:46.264 Telemetry Log Pages: Not Supported 00:44:46.264 Persistent Event Log Pages: Not Supported 00:44:46.264 Supported Log Pages Log Page: May Support 00:44:46.264 Commands Supported & Effects Log Page: Not Supported 00:44:46.264 Feature Identifiers & Effects Log Page: May Support 00:44:46.264 NVMe-MI Commands & Effects Log Page: May Support 00:44:46.264 Data Area 4 for Telemetry Log: Not Supported 00:44:46.264 Error Log Page Entries Supported: 1 00:44:46.264 Keep Alive: Not Supported 00:44:46.264 00:44:46.264 NVM Command Set Attributes 00:44:46.264 ========================== 00:44:46.264 Submission Queue Entry Size 00:44:46.264 Max: 64 00:44:46.264 Min: 64 00:44:46.264 Completion Queue Entry Size 00:44:46.264 Max: 16 00:44:46.264 Min: 16 00:44:46.264 Number of Namespaces: 256 00:44:46.264 Compare Command: Supported 00:44:46.264 Write Uncorrectable Command: Not Supported 00:44:46.264 Dataset Management Command: Supported 00:44:46.264 Write Zeroes Command: Supported 00:44:46.264 Set Features Save Field: Supported 00:44:46.264 Reservations: Not Supported 00:44:46.264 Timestamp: Supported 00:44:46.264 Copy: Supported 00:44:46.264 Volatile Write Cache: Present 00:44:46.264 Atomic Write Unit (Normal): 1 00:44:46.264 Atomic Write Unit (PFail): 1 00:44:46.264 Atomic Compare & Write Unit: 1 00:44:46.264 Fused Compare & Write: Not Supported 00:44:46.264 Scatter-Gather List 00:44:46.264 SGL Command Set: Supported 00:44:46.264 SGL Keyed: Not Supported 00:44:46.264 SGL Bit Bucket Descriptor: Not Supported 00:44:46.264 SGL Metadata Pointer: Not Supported 00:44:46.264 Oversized SGL: Not Supported 00:44:46.264 SGL Metadata Address: Not Supported 00:44:46.264 SGL Offset: Not Supported 00:44:46.264 Transport SGL Data Block: Not Supported 00:44:46.264 Replay Protected Memory Block: Not Supported 00:44:46.264 00:44:46.264 Firmware Slot Information 00:44:46.264 ========================= 00:44:46.264 Active slot: 1 00:44:46.264 Slot 1 Firmware Revision: 1.0 00:44:46.264 00:44:46.264 00:44:46.264 Commands Supported and Effects 00:44:46.264 ============================== 00:44:46.264 Admin Commands 00:44:46.264 -------------- 00:44:46.264 Delete I/O Submission Queue (00h): Supported 00:44:46.264 Create I/O Submission Queue (01h): Supported 00:44:46.264 Get Log Page (02h): Supported 00:44:46.264 Delete I/O Completion Queue (04h): Supported 00:44:46.264 Create I/O Completion Queue (05h): Supported 00:44:46.264 Identify (06h): Supported 00:44:46.264 Abort (08h): Supported 00:44:46.264 Set Features (09h): Supported 00:44:46.264 Get Features (0Ah): Supported 00:44:46.264 Asynchronous Event Request (0Ch): Supported 00:44:46.264 Namespace Attachment (15h): Supported NS-Inventory-Change 00:44:46.264 Directive Send (19h): Supported 00:44:46.264 Directive Receive (1Ah): Supported 00:44:46.264 Virtualization Management (1Ch): Supported 00:44:46.264 Doorbell Buffer Config (7Ch): Supported 00:44:46.264 Format NVM (80h): Supported LBA-Change 00:44:46.264 I/O Commands 00:44:46.264 ------------ 00:44:46.264 Flush (00h): Supported LBA-Change 00:44:46.264 Write (01h): Supported LBA-Change 00:44:46.265 Read (02h): Supported 00:44:46.265 Compare (05h): Supported 00:44:46.265 Write Zeroes (08h): Supported LBA-Change 00:44:46.265
Dataset Management (09h): Supported LBA-Change 00:44:46.265 Unknown (0Ch): Supported 00:44:46.265 Unknown (12h): Supported 00:44:46.265 Copy (19h): Supported LBA-Change 00:44:46.265 Unknown (1Dh): Supported LBA-Change 00:44:46.265 00:44:46.265 Error Log 00:44:46.265 ========= 00:44:46.265 00:44:46.265 Arbitration 00:44:46.265 =========== 00:44:46.265 Arbitration Burst: no limit 00:44:46.265 00:44:46.265 Power Management 00:44:46.265 ================ 00:44:46.265 Number of Power States: 1 00:44:46.265 Current Power State: Power State #0 00:44:46.265 Power State #0: 00:44:46.265 Max Power: 25.00 W 00:44:46.265 Non-Operational State: Operational 00:44:46.265 Entry Latency: 16 microseconds 00:44:46.265 Exit Latency: 4 microseconds 00:44:46.265 Relative Read Throughput: 0 00:44:46.265 Relative Read Latency: 0 00:44:46.265 Relative Write Throughput: 0 00:44:46.265 Relative Write Latency: 0 00:44:46.265 Idle Power: Not Reported 00:44:46.265 Active Power: Not Reported 00:44:46.265 Non-Operational Permissive Mode: Not Supported 00:44:46.265 00:44:46.265 Health Information 00:44:46.265 ================== 00:44:46.265 Critical Warnings: 00:44:46.265 Available Spare Space: OK 00:44:46.265 Temperature: OK 00:44:46.265 Device Reliability: OK 00:44:46.265 Read Only: No 00:44:46.265 Volatile Memory Backup: OK 00:44:46.265 Current Temperature: 323 Kelvin (50 Celsius) 00:44:46.265 Temperature Threshold: 343 Kelvin (70 Celsius) 00:44:46.265 Available Spare: 0% 00:44:46.265 Available Spare Threshold: 0% 00:44:46.265 Life Percentage Used: 0% 00:44:46.265 Data Units Read: 988 00:44:46.265 Data Units Written: 854 00:44:46.265 Host Read Commands: 46392 00:44:46.265 Host Write Commands: 45202 00:44:46.265 Controller Busy Time: 0 minutes 00:44:46.265 Power Cycles: 0 00:44:46.265 Power On Hours: 0 hours 00:44:46.265 Unsafe Shutdowns: 0 00:44:46.265 Unrecoverable Media Errors: 0 00:44:46.265 Lifetime Error Log Entries: 0 00:44:46.265 Warning Temperature Time: 0 minutes 00:44:46.265 Critical Temperature Time: 0 minutes 00:44:46.265 00:44:46.265 Number of Queues 00:44:46.265 ================ 00:44:46.265 Number of I/O Submission Queues: 64 00:44:46.265 Number of I/O Completion Queues: 64 00:44:46.265 00:44:46.265 ZNS Specific Controller Data 00:44:46.265 ============================ 00:44:46.265 Zone Append Size Limit: 0 00:44:46.265 00:44:46.265 00:44:46.265 Active Namespaces 00:44:46.265 ================= 00:44:46.265 Namespace ID:1 00:44:46.265 Error Recovery Timeout: Unlimited 00:44:46.265 Command Set Identifier: NVM (00h) 00:44:46.265 Deallocate: Supported 00:44:46.265 Deallocated/Unwritten Error: Supported 00:44:46.265 Deallocated Read Value: All 0x00 00:44:46.265 Deallocate in Write Zeroes: Not Supported 00:44:46.265 Deallocated Guard Field: 0xFFFF 00:44:46.265 Flush: Supported 00:44:46.265 Reservation: Not Supported 00:44:46.265 Namespace Sharing Capabilities: Private 00:44:46.265 Size (in LBAs): 1310720 (5GiB) 00:44:46.265 Capacity (in LBAs): 1310720 (5GiB) 00:44:46.265 Utilization (in LBAs): 1310720 (5GiB) 00:44:46.265 Thin Provisioning: Not Supported 00:44:46.265 Per-NS Atomic Units: No 00:44:46.265 Maximum Single Source Range Length: 128 00:44:46.265 Maximum Copy Length: 128 00:44:46.265 Maximum Source Range Count: 128 00:44:46.265 NGUID/EUI64 Never Reused: No 00:44:46.265 Namespace Write Protected: No 00:44:46.265 Number of LBA Formats: 8 00:44:46.265 Current LBA Format: LBA Format #04 00:44:46.265 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:46.265 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:44:46.265 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:46.265 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:46.265 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:46.265 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:46.265 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:46.265 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:46.265 00:44:46.265 NVM Specific Namespace Data 00:44:46.265 =========================== 00:44:46.265 Logical Block Storage Tag Mask: 0 00:44:46.265 Protection Information Capabilities: 00:44:46.265 16b Guard Protection Information Storage Tag Support: No 00:44:46.265 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:46.265 Storage Tag Check Read Support: No 00:44:46.265 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.265 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.265 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.265 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.265 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.265 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.265 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.265 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.523 09:55:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:44:46.523 09:55:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:44:46.783 ===================================================== 00:44:46.783 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:44:46.783 ===================================================== 00:44:46.783 Controller Capabilities/Features 00:44:46.783 ================================ 00:44:46.783 Vendor ID: 1b36 00:44:46.783 Subsystem Vendor ID: 1af4 00:44:46.783 Serial Number: 12342 00:44:46.783 Model Number: QEMU NVMe Ctrl 00:44:46.783 Firmware Version: 8.0.0 00:44:46.783 Recommended Arb Burst: 6 00:44:46.783 IEEE OUI Identifier: 00 54 52 00:44:46.783 Multi-path I/O 00:44:46.783 May have multiple subsystem ports: No 00:44:46.783 May have multiple controllers: No 00:44:46.783 Associated with SR-IOV VF: No 00:44:46.783 Max Data Transfer Size: 524288 00:44:46.783 Max Number of Namespaces: 256 00:44:46.783 Max Number of I/O Queues: 64 00:44:46.783 NVMe Specification Version (VS): 1.4 00:44:46.783 NVMe Specification Version (Identify): 1.4 00:44:46.783 Maximum Queue Entries: 2048 00:44:46.783 Contiguous Queues Required: Yes 00:44:46.783 Arbitration Mechanisms Supported 00:44:46.783 Weighted Round Robin: Not Supported 00:44:46.783 Vendor Specific: Not Supported 00:44:46.783 Reset Timeout: 7500 ms 00:44:46.783 Doorbell Stride: 4 bytes 00:44:46.783 NVM Subsystem Reset: Not Supported 00:44:46.783 Command Sets Supported 00:44:46.783 NVM Command Set: Supported 00:44:46.783 Boot Partition: Not Supported 00:44:46.783 Memory Page Size Minimum: 4096 bytes 00:44:46.783 Memory Page Size Maximum: 65536 bytes 00:44:46.783 Persistent Memory Region: Not Supported 00:44:46.783 Optional Asynchronous Events Supported 00:44:46.783 Namespace Attribute Notices: Supported 00:44:46.783 Firmware 
Activation Notices: Not Supported 00:44:46.783 ANA Change Notices: Not Supported 00:44:46.783 PLE Aggregate Log Change Notices: Not Supported 00:44:46.783 LBA Status Info Alert Notices: Not Supported 00:44:46.783 EGE Aggregate Log Change Notices: Not Supported 00:44:46.783 Normal NVM Subsystem Shutdown event: Not Supported 00:44:46.783 Zone Descriptor Change Notices: Not Supported 00:44:46.783 Discovery Log Change Notices: Not Supported 00:44:46.783 Controller Attributes 00:44:46.783 128-bit Host Identifier: Not Supported 00:44:46.783 Non-Operational Permissive Mode: Not Supported 00:44:46.783 NVM Sets: Not Supported 00:44:46.783 Read Recovery Levels: Not Supported 00:44:46.783 Endurance Groups: Not Supported 00:44:46.783 Predictable Latency Mode: Not Supported 00:44:46.783 Traffic Based Keep Alive: Not Supported 00:44:46.783 Namespace Granularity: Not Supported 00:44:46.783 SQ Associations: Not Supported 00:44:46.783 UUID List: Not Supported 00:44:46.783 Multi-Domain Subsystem: Not Supported 00:44:46.783 Fixed Capacity Management: Not Supported 00:44:46.783 Variable Capacity Management: Not Supported 00:44:46.783 Delete Endurance Group: Not Supported 00:44:46.783 Delete NVM Set: Not Supported 00:44:46.783 Extended LBA Formats Supported: Supported 00:44:46.783 Flexible Data Placement Supported: Not Supported 00:44:46.783 00:44:46.783 Controller Memory Buffer Support 00:44:46.783 ================================ 00:44:46.783 Supported: No 00:44:46.783 00:44:46.783 Persistent Memory Region Support 00:44:46.783 ================================ 00:44:46.783 Supported: No 00:44:46.783 00:44:46.783 Admin Command Set Attributes 00:44:46.783 ============================ 00:44:46.783 Security Send/Receive: Not Supported 00:44:46.783 Format NVM: Supported 00:44:46.783 Firmware Activate/Download: Not Supported 00:44:46.783 Namespace Management: Supported 00:44:46.783 Device Self-Test: Not Supported 00:44:46.783 Directives: Supported 00:44:46.783 NVMe-MI: Not Supported 00:44:46.783 Virtualization Management: Not Supported 00:44:46.783 Doorbell Buffer Config: Supported 00:44:46.783 Get LBA Status Capability: Not Supported 00:44:46.783 Command & Feature Lockdown Capability: Not Supported 00:44:46.783 Abort Command Limit: 4 00:44:46.783 Async Event Request Limit: 4 00:44:46.783 Number of Firmware Slots: N/A 00:44:46.783 Firmware Slot 1 Read-Only: N/A 00:44:46.783 Firmware Activation Without Reset: N/A 00:44:46.783 Multiple Update Detection Support: N/A 00:44:46.783 Firmware Update Granularity: No Information Provided 00:44:46.783 Per-Namespace SMART Log: Yes 00:44:46.783 Asymmetric Namespace Access Log Page: Not Supported 00:44:46.783 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:44:46.783 Command Effects Log Page: Supported 00:44:46.783 Get Log Page Extended Data: Supported 00:44:46.783 Telemetry Log Pages: Not Supported 00:44:46.783 Persistent Event Log Pages: Not Supported 00:44:46.783 Supported Log Pages Log Page: May Support 00:44:46.783 Commands Supported & Effects Log Page: Not Supported 00:44:46.783 Feature Identifiers & Effects Log Page: May Support 00:44:46.783 NVMe-MI Commands & Effects Log Page: May Support 00:44:46.783 Data Area 4 for Telemetry Log: Not Supported 00:44:46.783 Error Log Page Entries Supported: 1 00:44:46.783 Keep Alive: Not Supported 00:44:46.783 00:44:46.783 NVM Command Set Attributes 00:44:46.783 ========================== 00:44:46.783 Submission Queue Entry Size 00:44:46.783 Max: 64 00:44:46.783 Min: 64 00:44:46.783 Completion Queue Entry Size 00:44:46.783 Max: 16
00:44:46.783 Min: 16 00:44:46.783 Number of Namespaces: 256 00:44:46.783 Compare Command: Supported 00:44:46.783 Write Uncorrectable Command: Not Supported 00:44:46.783 Dataset Management Command: Supported 00:44:46.783 Write Zeroes Command: Supported 00:44:46.783 Set Features Save Field: Supported 00:44:46.783 Reservations: Not Supported 00:44:46.783 Timestamp: Supported 00:44:46.783 Copy: Supported 00:44:46.783 Volatile Write Cache: Present 00:44:46.783 Atomic Write Unit (Normal): 1 00:44:46.783 Atomic Write Unit (PFail): 1 00:44:46.783 Atomic Compare & Write Unit: 1 00:44:46.783 Fused Compare & Write: Not Supported 00:44:46.783 Scatter-Gather List 00:44:46.783 SGL Command Set: Supported 00:44:46.783 SGL Keyed: Not Supported 00:44:46.783 SGL Bit Bucket Descriptor: Not Supported 00:44:46.783 SGL Metadata Pointer: Not Supported 00:44:46.783 Oversized SGL: Not Supported 00:44:46.783 SGL Metadata Address: Not Supported 00:44:46.783 SGL Offset: Not Supported 00:44:46.783 Transport SGL Data Block: Not Supported 00:44:46.783 Replay Protected Memory Block: Not Supported 00:44:46.783 00:44:46.783 Firmware Slot Information 00:44:46.783 ========================= 00:44:46.783 Active slot: 1 00:44:46.783 Slot 1 Firmware Revision: 1.0 00:44:46.783 00:44:46.783 00:44:46.783 Commands Supported and Effects 00:44:46.783 ============================== 00:44:46.783 Admin Commands 00:44:46.783 -------------- 00:44:46.783 Delete I/O Submission Queue (00h): Supported 00:44:46.783 Create I/O Submission Queue (01h): Supported 00:44:46.783 Get Log Page (02h): Supported 00:44:46.783 Delete I/O Completion Queue (04h): Supported 00:44:46.783 Create I/O Completion Queue (05h): Supported 00:44:46.783 Identify (06h): Supported 00:44:46.783 Abort (08h): Supported 00:44:46.783 Set Features (09h): Supported 00:44:46.783 Get Features (0Ah): Supported 00:44:46.783 Asynchronous Event Request (0Ch): Supported 00:44:46.783 Namespace Attachment (15h): Supported NS-Inventory-Change 00:44:46.784 Directive Send (19h): Supported 00:44:46.784 Directive Receive (1Ah): Supported 00:44:46.784 Virtualization Management (1Ch): Supported 00:44:46.784 Doorbell Buffer Config (7Ch): Supported 00:44:46.784 Format NVM (80h): Supported LBA-Change 00:44:46.784 I/O Commands 00:44:46.784 ------------ 00:44:46.784 Flush (00h): Supported LBA-Change 00:44:46.784 Write (01h): Supported LBA-Change 00:44:46.784 Read (02h): Supported 00:44:46.784 Compare (05h): Supported 00:44:46.784 Write Zeroes (08h): Supported LBA-Change 00:44:46.784 Dataset Management (09h): Supported LBA-Change 00:44:46.784 Unknown (0Ch): Supported 00:44:46.784 Unknown (12h): Supported 00:44:46.784 Copy (19h): Supported LBA-Change 00:44:46.784 Unknown (1Dh): Supported LBA-Change 00:44:46.784 00:44:46.784 Error Log 00:44:46.784 ========= 00:44:46.784 00:44:46.784 Arbitration 00:44:46.784 =========== 00:44:46.784 Arbitration Burst: no limit 00:44:46.784 00:44:46.784 Power Management 00:44:46.784 ================ 00:44:46.784 Number of Power States: 1 00:44:46.784 Current Power State: Power State #0 00:44:46.784 Power State #0: 00:44:46.784 Max Power: 25.00 W 00:44:46.784 Non-Operational State: Operational 00:44:46.784 Entry Latency: 16 microseconds 00:44:46.784 Exit Latency: 4 microseconds 00:44:46.784 Relative Read Throughput: 0 00:44:46.784 Relative Read Latency: 0 00:44:46.784 Relative Write Throughput: 0 00:44:46.784 Relative Write Latency: 0 00:44:46.784 Idle Power: Not Reported 00:44:46.784 Active Power: Not Reported 00:44:46.784 Non-Operational Permissive Mode: Not Supported 
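The identify dumps in this log are plain text with stable field labels, so expected controller properties can be spot-checked with ordinary text tools. The fragment below is a minimal sketch, not part of the test harness: it reuses the spdk_nvme_identify binary path and the PCIe transport ID string that appear elsewhere in this log, and greps for field labels exactly as the tool prints them above.

  # Hypothetical spot check against the 0000:00:12.0 controller dump above.
  IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
  out=$("$IDENTIFY" -r 'trtype:PCIe traddr:0000:00:12.0' -i 0)
  # Field labels are matched verbatim as printed by the tool.
  echo "$out" | grep -q 'Maximum Queue Entries: 2048' || echo 'unexpected max queue entries'
  echo "$out" | grep -q 'Max Power: 25.00 W' || echo 'unexpected power state 0'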
00:44:46.784 00:44:46.784 Health Information 00:44:46.784 ================== 00:44:46.784 Critical Warnings: 00:44:46.784 Available Spare Space: OK 00:44:46.784 Temperature: OK 00:44:46.784 Device Reliability: OK 00:44:46.784 Read Only: No 00:44:46.784 Volatile Memory Backup: OK 00:44:46.784 Current Temperature: 323 Kelvin (50 Celsius) 00:44:46.784 Temperature Threshold: 343 Kelvin (70 Celsius) 00:44:46.784 Available Spare: 0% 00:44:46.784 Available Spare Threshold: 0% 00:44:46.784 Life Percentage Used: 0% 00:44:46.784 Data Units Read: 2070 00:44:46.784 Data Units Written: 1858 00:44:46.784 Host Read Commands: 95753 00:44:46.784 Host Write Commands: 94064 00:44:46.784 Controller Busy Time: 0 minutes 00:44:46.784 Power Cycles: 0 00:44:46.784 Power On Hours: 0 hours 00:44:46.784 Unsafe Shutdowns: 0 00:44:46.784 Unrecoverable Media Errors: 0 00:44:46.784 Lifetime Error Log Entries: 0 00:44:46.784 Warning Temperature Time: 0 minutes 00:44:46.784 Critical Temperature Time: 0 minutes 00:44:46.784 00:44:46.784 Number of Queues 00:44:46.784 ================ 00:44:46.784 Number of I/O Submission Queues: 64 00:44:46.784 Number of I/O Completion Queues: 64 00:44:46.784 00:44:46.784 ZNS Specific Controller Data 00:44:46.784 ============================ 00:44:46.784 Zone Append Size Limit: 0 00:44:46.784 00:44:46.784 00:44:46.784 Active Namespaces 00:44:46.784 ================= 00:44:46.784 Namespace ID:1 00:44:46.784 Error Recovery Timeout: Unlimited 00:44:46.784 Command Set Identifier: NVM (00h) 00:44:46.784 Deallocate: Supported 00:44:46.784 Deallocated/Unwritten Error: Supported 00:44:46.784 Deallocated Read Value: All 0x00 00:44:46.784 Deallocate in Write Zeroes: Not Supported 00:44:46.784 Deallocated Guard Field: 0xFFFF 00:44:46.784 Flush: Supported 00:44:46.784 Reservation: Not Supported 00:44:46.784 Namespace Sharing Capabilities: Private 00:44:46.784 Size (in LBAs): 1048576 (4GiB) 00:44:46.784 Capacity (in LBAs): 1048576 (4GiB) 00:44:46.784 Utilization (in LBAs): 1048576 (4GiB) 00:44:46.784 Thin Provisioning: Not Supported 00:44:46.784 Per-NS Atomic Units: No 00:44:46.784 Maximum Single Source Range Length: 128 00:44:46.784 Maximum Copy Length: 128 00:44:46.784 Maximum Source Range Count: 128 00:44:46.784 NGUID/EUI64 Never Reused: No 00:44:46.784 Namespace Write Protected: No 00:44:46.784 Number of LBA Formats: 8 00:44:46.784 Current LBA Format: LBA Format #04 00:44:46.784 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:46.784 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:46.784 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:46.784 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:46.784 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:46.784 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:46.784 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:46.784 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:46.784 00:44:46.784 NVM Specific Namespace Data 00:44:46.784 =========================== 00:44:46.784 Logical Block Storage Tag Mask: 0 00:44:46.784 Protection Information Capabilities: 00:44:46.784 16b Guard Protection Information Storage Tag Support: No 00:44:46.784 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:46.784 Storage Tag Check Read Support: No 00:44:46.784 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Namespace ID:2 00:44:46.784 Error Recovery Timeout: Unlimited 00:44:46.784 Command Set Identifier: NVM (00h) 00:44:46.784 Deallocate: Supported 00:44:46.784 Deallocated/Unwritten Error: Supported 00:44:46.784 Deallocated Read Value: All 0x00 00:44:46.784 Deallocate in Write Zeroes: Not Supported 00:44:46.784 Deallocated Guard Field: 0xFFFF 00:44:46.784 Flush: Supported 00:44:46.784 Reservation: Not Supported 00:44:46.784 Namespace Sharing Capabilities: Private 00:44:46.784 Size (in LBAs): 1048576 (4GiB) 00:44:46.784 Capacity (in LBAs): 1048576 (4GiB) 00:44:46.784 Utilization (in LBAs): 1048576 (4GiB) 00:44:46.784 Thin Provisioning: Not Supported 00:44:46.784 Per-NS Atomic Units: No 00:44:46.784 Maximum Single Source Range Length: 128 00:44:46.784 Maximum Copy Length: 128 00:44:46.784 Maximum Source Range Count: 128 00:44:46.784 NGUID/EUI64 Never Reused: No 00:44:46.784 Namespace Write Protected: No 00:44:46.784 Number of LBA Formats: 8 00:44:46.784 Current LBA Format: LBA Format #04 00:44:46.784 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:46.784 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:46.784 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:46.784 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:46.784 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:46.784 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:46.784 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:46.784 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:46.784 00:44:46.784 NVM Specific Namespace Data 00:44:46.784 =========================== 00:44:46.784 Logical Block Storage Tag Mask: 0 00:44:46.784 Protection Information Capabilities: 00:44:46.784 16b Guard Protection Information Storage Tag Support: No 00:44:46.784 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:46.784 Storage Tag Check Read Support: No 00:44:46.784 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.784 Namespace ID:3 00:44:46.784 Error Recovery Timeout: Unlimited 00:44:46.784 Command Set Identifier: NVM (00h) 00:44:46.784 Deallocate: Supported 00:44:46.784 Deallocated/Unwritten Error: Supported 00:44:46.784 Deallocated Read 
Value: All 0x00 00:44:46.784 Deallocate in Write Zeroes: Not Supported 00:44:46.784 Deallocated Guard Field: 0xFFFF 00:44:46.784 Flush: Supported 00:44:46.784 Reservation: Not Supported 00:44:46.784 Namespace Sharing Capabilities: Private 00:44:46.784 Size (in LBAs): 1048576 (4GiB) 00:44:46.784 Capacity (in LBAs): 1048576 (4GiB) 00:44:46.784 Utilization (in LBAs): 1048576 (4GiB) 00:44:46.784 Thin Provisioning: Not Supported 00:44:46.784 Per-NS Atomic Units: No 00:44:46.784 Maximum Single Source Range Length: 128 00:44:46.784 Maximum Copy Length: 128 00:44:46.784 Maximum Source Range Count: 128 00:44:46.784 NGUID/EUI64 Never Reused: No 00:44:46.784 Namespace Write Protected: No 00:44:46.784 Number of LBA Formats: 8 00:44:46.784 Current LBA Format: LBA Format #04 00:44:46.784 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:46.784 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:46.784 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:46.785 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:46.785 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:46.785 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:46.785 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:46.785 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:46.785 00:44:46.785 NVM Specific Namespace Data 00:44:46.785 =========================== 00:44:46.785 Logical Block Storage Tag Mask: 0 00:44:46.785 Protection Information Capabilities: 00:44:46.785 16b Guard Protection Information Storage Tag Support: No 00:44:46.785 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:46.785 Storage Tag Check Read Support: No 00:44:46.785 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.785 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.785 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.785 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.785 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.785 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.785 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.785 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:46.785 09:55:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:44:46.785 09:55:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:44:47.044 ===================================================== 00:44:47.044 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:44:47.044 ===================================================== 00:44:47.044 Controller Capabilities/Features 00:44:47.044 ================================ 00:44:47.044 Vendor ID: 1b36 00:44:47.044 Subsystem Vendor ID: 1af4 00:44:47.044 Serial Number: 12343 00:44:47.044 Model Number: QEMU NVMe Ctrl 00:44:47.044 Firmware Version: 8.0.0 00:44:47.044 Recommended Arb Burst: 6 00:44:47.044 IEEE OUI Identifier: 00 54 52 00:44:47.044 Multi-path I/O 00:44:47.044 May have multiple subsystem ports: No 00:44:47.044 May have multiple controllers: Yes 00:44:47.044 Associated with SR-IOV VF: No 00:44:47.044 Max Data Transfer Size: 524288 00:44:47.044 Max Number of Namespaces: 
256 00:44:47.044 Max Number of I/O Queues: 64 00:44:47.044 NVMe Specification Version (VS): 1.4 00:44:47.044 NVMe Specification Version (Identify): 1.4 00:44:47.044 Maximum Queue Entries: 2048 00:44:47.044 Contiguous Queues Required: Yes 00:44:47.044 Arbitration Mechanisms Supported 00:44:47.044 Weighted Round Robin: Not Supported 00:44:47.044 Vendor Specific: Not Supported 00:44:47.044 Reset Timeout: 7500 ms 00:44:47.044 Doorbell Stride: 4 bytes 00:44:47.044 NVM Subsystem Reset: Not Supported 00:44:47.044 Command Sets Supported 00:44:47.044 NVM Command Set: Supported 00:44:47.044 Boot Partition: Not Supported 00:44:47.044 Memory Page Size Minimum: 4096 bytes 00:44:47.044 Memory Page Size Maximum: 65536 bytes 00:44:47.044 Persistent Memory Region: Not Supported 00:44:47.044 Optional Asynchronous Events Supported 00:44:47.044 Namespace Attribute Notices: Supported 00:44:47.044 Firmware Activation Notices: Not Supported 00:44:47.044 ANA Change Notices: Not Supported 00:44:47.044 PLE Aggregate Log Change Notices: Not Supported 00:44:47.044 LBA Status Info Alert Notices: Not Supported 00:44:47.044 EGE Aggregate Log Change Notices: Not Supported 00:44:47.044 Normal NVM Subsystem Shutdown event: Not Supported 00:44:47.044 Zone Descriptor Change Notices: Not Supported 00:44:47.044 Discovery Log Change Notices: Not Supported 00:44:47.044 Controller Attributes 00:44:47.044 128-bit Host Identifier: Not Supported 00:44:47.044 Non-Operational Permissive Mode: Not Supported 00:44:47.044 NVM Sets: Not Supported 00:44:47.044 Read Recovery Levels: Not Supported 00:44:47.044 Endurance Groups: Supported 00:44:47.044 Predictable Latency Mode: Not Supported 00:44:47.044 Traffic Based Keep Alive: Not Supported 00:44:47.044 Namespace Granularity: Not Supported 00:44:47.044 SQ Associations: Not Supported 00:44:47.044 UUID List: Not Supported 00:44:47.044 Multi-Domain Subsystem: Not Supported 00:44:47.044 Fixed Capacity Management: Not Supported 00:44:47.044 Variable Capacity Management: Not Supported 00:44:47.044 Delete Endurance Group: Not Supported 00:44:47.044 Delete NVM Set: Not Supported 00:44:47.044 Extended LBA Formats Supported: Supported 00:44:47.044 Flexible Data Placement Supported: Supported 00:44:47.044 00:44:47.044 Controller Memory Buffer Support 00:44:47.044 ================================ 00:44:47.044 Supported: No 00:44:47.044 00:44:47.044 Persistent Memory Region Support 00:44:47.044 ================================ 00:44:47.044 Supported: No 00:44:47.044 00:44:47.044 Admin Command Set Attributes 00:44:47.044 ============================ 00:44:47.044 Security Send/Receive: Not Supported 00:44:47.044 Format NVM: Supported 00:44:47.044 Firmware Activate/Download: Not Supported 00:44:47.044 Namespace Management: Supported 00:44:47.044 Device Self-Test: Not Supported 00:44:47.044 Directives: Supported 00:44:47.044 NVMe-MI: Not Supported 00:44:47.044 Virtualization Management: Not Supported 00:44:47.044 Doorbell Buffer Config: Supported 00:44:47.044 Get LBA Status Capability: Not Supported 00:44:47.044 Command & Feature Lockdown Capability: Not Supported 00:44:47.044 Abort Command Limit: 4 00:44:47.044 Async Event Request Limit: 4 00:44:47.044 Number of Firmware Slots: N/A 00:44:47.044 Firmware Slot 1 Read-Only: N/A 00:44:47.044 Firmware Activation Without Reset: N/A 00:44:47.044 Multiple Update Detection Support: N/A 00:44:47.044 Firmware Update Granularity: No Information Provided 00:44:47.044 Per-Namespace SMART Log: Yes 00:44:47.044 Asymmetric Namespace Access Log Page: Not Supported
00:44:47.044 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:44:47.044 Command Effects Log Page: Supported 00:44:47.044 Get Log Page Extended Data: Supported 00:44:47.044 Telemetry Log Pages: Not Supported 00:44:47.044 Persistent Event Log Pages: Not Supported 00:44:47.044 Supported Log Pages Log Page: May Support 00:44:47.044 Commands Supported & Effects Log Page: Not Supported 00:44:47.044 Feature Identifiers & Effects Log Page: May Support 00:44:47.044 NVMe-MI Commands & Effects Log Page: May Support 00:44:47.044 Data Area 4 for Telemetry Log: Not Supported 00:44:47.044 Error Log Page Entries Supported: 1 00:44:47.044 Keep Alive: Not Supported 00:44:47.044 00:44:47.044 NVM Command Set Attributes 00:44:47.044 ========================== 00:44:47.044 Submission Queue Entry Size 00:44:47.044 Max: 64 00:44:47.044 Min: 64 00:44:47.044 Completion Queue Entry Size 00:44:47.044 Max: 16 00:44:47.044 Min: 16 00:44:47.044 Number of Namespaces: 256 00:44:47.044 Compare Command: Supported 00:44:47.044 Write Uncorrectable Command: Not Supported 00:44:47.044 Dataset Management Command: Supported 00:44:47.044 Write Zeroes Command: Supported 00:44:47.044 Set Features Save Field: Supported 00:44:47.044 Reservations: Not Supported 00:44:47.044 Timestamp: Supported 00:44:47.044 Copy: Supported 00:44:47.044 Volatile Write Cache: Present 00:44:47.044 Atomic Write Unit (Normal): 1 00:44:47.044 Atomic Write Unit (PFail): 1 00:44:47.044 Atomic Compare & Write Unit: 1 00:44:47.044 Fused Compare & Write: Not Supported 00:44:47.044 Scatter-Gather List 00:44:47.044 SGL Command Set: Supported 00:44:47.044 SGL Keyed: Not Supported 00:44:47.044 SGL Bit Bucket Descriptor: Not Supported 00:44:47.044 SGL Metadata Pointer: Not Supported 00:44:47.044 Oversized SGL: Not Supported 00:44:47.044 SGL Metadata Address: Not Supported 00:44:47.044 SGL Offset: Not Supported 00:44:47.044 Transport SGL Data Block: Not Supported 00:44:47.044 Replay Protected Memory Block: Not Supported 00:44:47.044 00:44:47.044 Firmware Slot Information 00:44:47.044 ========================= 00:44:47.044 Active slot: 1 00:44:47.044 Slot 1 Firmware Revision: 1.0 00:44:47.044 00:44:47.044 00:44:47.044 Commands Supported and Effects 00:44:47.044 ============================== 00:44:47.044 Admin Commands 00:44:47.044 -------------- 00:44:47.044 Delete I/O Submission Queue (00h): Supported 00:44:47.044 Create I/O Submission Queue (01h): Supported 00:44:47.044 Get Log Page (02h): Supported 00:44:47.044 Delete I/O Completion Queue (04h): Supported 00:44:47.044 Create I/O Completion Queue (05h): Supported 00:44:47.044 Identify (06h): Supported 00:44:47.045 Abort (08h): Supported 00:44:47.045 Set Features (09h): Supported 00:44:47.045 Get Features (0Ah): Supported 00:44:47.045 Asynchronous Event Request (0Ch): Supported 00:44:47.045 Namespace Attachment (15h): Supported NS-Inventory-Change 00:44:47.045 Directive Send (19h): Supported 00:44:47.045 Directive Receive (1Ah): Supported 00:44:47.045 Virtualization Management (1Ch): Supported 00:44:47.045 Doorbell Buffer Config (7Ch): Supported 00:44:47.045 Format NVM (80h): Supported LBA-Change 00:44:47.045 I/O Commands 00:44:47.045 ------------ 00:44:47.045 Flush (00h): Supported LBA-Change 00:44:47.045 Write (01h): Supported LBA-Change 00:44:47.045 Read (02h): Supported 00:44:47.045 Compare (05h): Supported 00:44:47.045 Write Zeroes (08h): Supported LBA-Change 00:44:47.045 Dataset Management (09h): Supported LBA-Change 00:44:47.045 Unknown (0Ch): Supported 00:44:47.045 Unknown (12h): Supported 00:44:47.045 Copy
(19h): Supported LBA-Change 00:44:47.045 Unknown (1Dh): Supported LBA-Change 00:44:47.045 00:44:47.045 Error Log 00:44:47.045 ========= 00:44:47.045 00:44:47.045 Arbitration 00:44:47.045 =========== 00:44:47.045 Arbitration Burst: no limit 00:44:47.045 00:44:47.045 Power Management 00:44:47.045 ================ 00:44:47.045 Number of Power States: 1 00:44:47.045 Current Power State: Power State #0 00:44:47.045 Power State #0: 00:44:47.045 Max Power: 25.00 W 00:44:47.045 Non-Operational State: Operational 00:44:47.045 Entry Latency: 16 microseconds 00:44:47.045 Exit Latency: 4 microseconds 00:44:47.045 Relative Read Throughput: 0 00:44:47.045 Relative Read Latency: 0 00:44:47.045 Relative Write Throughput: 0 00:44:47.045 Relative Write Latency: 0 00:44:47.045 Idle Power: Not Reported 00:44:47.045 Active Power: Not Reported 00:44:47.045 Non-Operational Permissive Mode: Not Supported 00:44:47.045 00:44:47.045 Health Information 00:44:47.045 ================== 00:44:47.045 Critical Warnings: 00:44:47.045 Available Spare Space: OK 00:44:47.045 Temperature: OK 00:44:47.045 Device Reliability: OK 00:44:47.045 Read Only: No 00:44:47.045 Volatile Memory Backup: OK 00:44:47.045 Current Temperature: 323 Kelvin (50 Celsius) 00:44:47.045 Temperature Threshold: 343 Kelvin (70 Celsius) 00:44:47.045 Available Spare: 0% 00:44:47.045 Available Spare Threshold: 0% 00:44:47.045 Life Percentage Used: 0% 00:44:47.045 Data Units Read: 767 00:44:47.045 Data Units Written: 696 00:44:47.045 Host Read Commands: 32586 00:44:47.045 Host Write Commands: 32010 00:44:47.045 Controller Busy Time: 0 minutes 00:44:47.045 Power Cycles: 0 00:44:47.045 Power On Hours: 0 hours 00:44:47.045 Unsafe Shutdowns: 0 00:44:47.045 Unrecoverable Media Errors: 0 00:44:47.045 Lifetime Error Log Entries: 0 00:44:47.045 Warning Temperature Time: 0 minutes 00:44:47.045 Critical Temperature Time: 0 minutes 00:44:47.045 00:44:47.045 Number of Queues 00:44:47.045 ================ 00:44:47.045 Number of I/O Submission Queues: 64 00:44:47.045 Number of I/O Completion Queues: 64 00:44:47.045 00:44:47.045 ZNS Specific Controller Data 00:44:47.045 ============================ 00:44:47.045 Zone Append Size Limit: 0 00:44:47.045 00:44:47.045 00:44:47.045 Active Namespaces 00:44:47.045 ================= 00:44:47.045 Namespace ID:1 00:44:47.045 Error Recovery Timeout: Unlimited 00:44:47.045 Command Set Identifier: NVM (00h) 00:44:47.045 Deallocate: Supported 00:44:47.045 Deallocated/Unwritten Error: Supported 00:44:47.045 Deallocated Read Value: All 0x00 00:44:47.045 Deallocate in Write Zeroes: Not Supported 00:44:47.045 Deallocated Guard Field: 0xFFFF 00:44:47.045 Flush: Supported 00:44:47.045 Reservation: Not Supported 00:44:47.045 Namespace Sharing Capabilities: Multiple Controllers 00:44:47.045 Size (in LBAs): 262144 (1GiB) 00:44:47.045 Capacity (in LBAs): 262144 (1GiB) 00:44:47.045 Utilization (in LBAs): 262144 (1GiB) 00:44:47.045 Thin Provisioning: Not Supported 00:44:47.045 Per-NS Atomic Units: No 00:44:47.045 Maximum Single Source Range Length: 128 00:44:47.045 Maximum Copy Length: 128 00:44:47.045 Maximum Source Range Count: 128 00:44:47.045 NGUID/EUI64 Never Reused: No 00:44:47.045 Namespace Write Protected: No 00:44:47.045 Endurance group ID: 1 00:44:47.045 Number of LBA Formats: 8 00:44:47.045 Current LBA Format: LBA Format #04 00:44:47.045 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:47.045 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:47.045 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:47.045 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:44:47.045 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:47.045 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:47.045 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:47.045 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:47.045 00:44:47.045 Get Feature FDP: 00:44:47.045 ================ 00:44:47.045 Enabled: Yes 00:44:47.045 FDP configuration index: 0 00:44:47.045 00:44:47.045 FDP configurations log page 00:44:47.045 =========================== 00:44:47.045 Number of FDP configurations: 1 00:44:47.045 Version: 0 00:44:47.045 Size: 112 00:44:47.045 FDP Configuration Descriptor: 0 00:44:47.045 Descriptor Size: 96 00:44:47.045 Reclaim Group Identifier format: 2 00:44:47.045 FDP Volatile Write Cache: Not Present 00:44:47.045 FDP Configuration: Valid 00:44:47.045 Vendor Specific Size: 0 00:44:47.045 Number of Reclaim Groups: 2 00:44:47.045 Number of Reclaim Unit Handles: 8 00:44:47.045 Max Placement Identifiers: 128 00:44:47.045 Number of Namespaces Supported: 256 00:44:47.045 Reclaim Unit Nominal Size: 6000000 bytes 00:44:47.045 Estimated Reclaim Unit Time Limit: Not Reported 00:44:47.045 RUH Desc #000: RUH Type: Initially Isolated 00:44:47.045 RUH Desc #001: RUH Type: Initially Isolated 00:44:47.045 RUH Desc #002: RUH Type: Initially Isolated 00:44:47.045 RUH Desc #003: RUH Type: Initially Isolated 00:44:47.045 RUH Desc #004: RUH Type: Initially Isolated 00:44:47.045 RUH Desc #005: RUH Type: Initially Isolated 00:44:47.045 RUH Desc #006: RUH Type: Initially Isolated 00:44:47.045 RUH Desc #007: RUH Type: Initially Isolated 00:44:47.045 00:44:47.045 FDP reclaim unit handle usage log page 00:44:47.045 ====================================== 00:44:47.045 Number of Reclaim Unit Handles: 8 00:44:47.045 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:44:47.045 RUH Usage Desc #001: RUH Attributes: Unused 00:44:47.045 RUH Usage Desc #002: RUH Attributes: Unused 00:44:47.045 RUH Usage Desc #003: RUH Attributes: Unused 00:44:47.045 RUH Usage Desc #004: RUH Attributes: Unused 00:44:47.045 RUH Usage Desc #005: RUH Attributes: Unused 00:44:47.045 RUH Usage Desc #006: RUH Attributes: Unused 00:44:47.045 RUH Usage Desc #007: RUH Attributes: Unused 00:44:47.045 00:44:47.045 FDP statistics log page 00:44:47.045 ======================= 00:44:47.045 Host bytes with metadata written: 435658752 00:44:47.045 Media bytes with metadata written: 435724288 00:44:47.045 Media bytes erased: 0 00:44:47.045 00:44:47.045 FDP events log page 00:44:47.045 =================== 00:44:47.045 Number of FDP events: 0 00:44:47.045 00:44:47.045 NVM Specific Namespace Data 00:44:47.045 =========================== 00:44:47.045 Logical Block Storage Tag Mask: 0 00:44:47.045 Protection Information Capabilities: 00:44:47.045 16b Guard Protection Information Storage Tag Support: No 00:44:47.045 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:47.045 Storage Tag Check Read Support: No 00:44:47.045 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:47.045 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:47.045 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:47.045 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:47.045 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:47.045 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:47.045 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:47.045 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:47.045 ************************************ 00:44:47.045 END TEST nvme_identify 00:44:47.045 ************************************ 00:44:47.045 00:44:47.045 real 0m1.865s 00:44:47.045 user 0m0.740s 00:44:47.045 sys 0m0.896s 00:44:47.045 09:55:54 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:47.045 09:55:54 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:44:47.045 09:55:54 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:44:47.045 09:55:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:47.045 09:55:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:47.045 09:55:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:47.045 ************************************ 00:44:47.045 START TEST nvme_perf 00:44:47.045 ************************************ 00:44:47.045 09:55:54 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:44:47.045 09:55:54 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:44:48.422 Initializing NVMe Controllers 00:44:48.422 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:48.422 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:44:48.422 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:44:48.422 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:44:48.422 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:44:48.422 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:44:48.422 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:44:48.422 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:44:48.422 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:44:48.422 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:44:48.422 Initialization complete. Launching workers. 
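Each MiB/s figure in the summary table that follows is the per-namespace IOPS multiplied by the 12288-byte I/O size requested with -o in the spdk_nvme_perf invocation above, converted to mebibytes. A quick arithmetic cross-check of the first row, as a sketch using the numbers reported below:

  # 12152.01 IOPS x 12288 bytes per I/O, converted to MiB/s:
  awk 'BEGIN { printf "%.2f MiB/s\n", 12152.01 * 12288 / (1024 * 1024) }'
  # prints 142.41 MiB/s, matching the PCIE (0000:00:10.0) NSID 1 row;
  # the Total row is the sum over all six namespaces (6 x 12152.01 IOPS).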
00:44:48.422 ======================================================== 00:44:48.422 Latency(us) 00:44:48.422 Device Information : IOPS MiB/s Average min max 00:44:48.422 PCIE (0000:00:10.0) NSID 1 from core 0: 12152.01 142.41 10546.31 8304.73 51843.29 00:44:48.422 PCIE (0000:00:11.0) NSID 1 from core 0: 12152.01 142.41 10517.02 8484.52 48775.66 00:44:48.422 PCIE (0000:00:13.0) NSID 1 from core 0: 12152.01 142.41 10485.44 8438.94 46157.40 00:44:48.422 PCIE (0000:00:12.0) NSID 1 from core 0: 12152.01 142.41 10453.13 8371.88 43040.17 00:44:48.422 PCIE (0000:00:12.0) NSID 2 from core 0: 12152.01 142.41 10418.98 8389.21 39725.81 00:44:48.422 PCIE (0000:00:12.0) NSID 3 from core 0: 12152.01 142.41 10383.55 8462.31 36128.82 00:44:48.422 ======================================================== 00:44:48.422 Total : 72912.07 854.44 10467.41 8304.73 51843.29 00:44:48.422 00:44:48.422 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:44:48.422 ================================================================================= 00:44:48.422 1.00000% : 8698.415us 00:44:48.422 10.00000% : 9115.462us 00:44:48.422 25.00000% : 9532.509us 00:44:48.422 50.00000% : 10128.291us 00:44:48.422 75.00000% : 10843.229us 00:44:48.422 90.00000% : 11439.011us 00:44:48.422 95.00000% : 11856.058us 00:44:48.422 98.00000% : 12451.840us 00:44:48.422 99.00000% : 39559.913us 00:44:48.422 99.50000% : 49330.735us 00:44:48.422 99.90000% : 51475.549us 00:44:48.422 99.99000% : 51952.175us 00:44:48.422 99.99900% : 51952.175us 00:44:48.422 99.99990% : 51952.175us 00:44:48.422 99.99999% : 51952.175us 00:44:48.422 00:44:48.422 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:44:48.422 ================================================================================= 00:44:48.422 1.00000% : 8757.993us 00:44:48.422 10.00000% : 9175.040us 00:44:48.422 25.00000% : 9532.509us 00:44:48.422 50.00000% : 10128.291us 00:44:48.422 75.00000% : 10843.229us 00:44:48.422 90.00000% : 11439.011us 00:44:48.422 95.00000% : 11796.480us 00:44:48.422 98.00000% : 12273.105us 00:44:48.422 99.00000% : 37415.098us 00:44:48.422 99.50000% : 46232.669us 00:44:48.422 99.90000% : 48377.484us 00:44:48.422 99.99000% : 48854.109us 00:44:48.422 99.99900% : 48854.109us 00:44:48.422 99.99990% : 48854.109us 00:44:48.422 99.99999% : 48854.109us 00:44:48.422 00:44:48.422 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:44:48.422 ================================================================================= 00:44:48.422 1.00000% : 8757.993us 00:44:48.422 10.00000% : 9175.040us 00:44:48.422 25.00000% : 9532.509us 00:44:48.422 50.00000% : 10128.291us 00:44:48.422 75.00000% : 10843.229us 00:44:48.422 90.00000% : 11379.433us 00:44:48.422 95.00000% : 11736.902us 00:44:48.422 98.00000% : 12273.105us 00:44:48.422 99.00000% : 35031.971us 00:44:48.422 99.50000% : 43849.542us 00:44:48.422 99.90000% : 45756.044us 00:44:48.422 99.99000% : 46232.669us 00:44:48.422 99.99900% : 46232.669us 00:44:48.422 99.99990% : 46232.669us 00:44:48.422 99.99999% : 46232.669us 00:44:48.422 00:44:48.422 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:44:48.422 ================================================================================= 00:44:48.422 1.00000% : 8757.993us 00:44:48.422 10.00000% : 9175.040us 00:44:48.422 25.00000% : 9532.509us 00:44:48.422 50.00000% : 10128.291us 00:44:48.422 75.00000% : 10843.229us 00:44:48.422 90.00000% : 11379.433us 00:44:48.422 95.00000% : 11736.902us 00:44:48.422 98.00000% : 12213.527us 
00:44:48.422 99.00000% : 32410.531us 00:44:48.422 99.50000% : 40751.476us 00:44:48.422 99.90000% : 42657.978us 00:44:48.422 99.99000% : 43134.604us 00:44:48.422 99.99900% : 43134.604us 00:44:48.422 99.99990% : 43134.604us 00:44:48.422 99.99999% : 43134.604us 00:44:48.422 00:44:48.422 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:44:48.422 ================================================================================= 00:44:48.422 1.00000% : 8757.993us 00:44:48.422 10.00000% : 9175.040us 00:44:48.422 25.00000% : 9532.509us 00:44:48.422 50.00000% : 10128.291us 00:44:48.422 75.00000% : 10843.229us 00:44:48.422 90.00000% : 11379.433us 00:44:48.422 95.00000% : 11736.902us 00:44:48.422 98.00000% : 12153.949us 00:44:48.422 99.00000% : 29431.622us 00:44:48.422 99.50000% : 37415.098us 00:44:48.422 99.90000% : 39321.600us 00:44:48.422 99.99000% : 39798.225us 00:44:48.422 99.99900% : 39798.225us 00:44:48.422 99.99990% : 39798.225us 00:44:48.422 99.99999% : 39798.225us 00:44:48.422 00:44:48.422 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:44:48.422 ================================================================================= 00:44:48.422 1.00000% : 8757.993us 00:44:48.422 10.00000% : 9175.040us 00:44:48.422 25.00000% : 9532.509us 00:44:48.422 50.00000% : 10128.291us 00:44:48.422 75.00000% : 10843.229us 00:44:48.422 90.00000% : 11379.433us 00:44:48.422 95.00000% : 11736.902us 00:44:48.422 98.00000% : 12094.371us 00:44:48.422 99.00000% : 26214.400us 00:44:48.422 99.50000% : 33840.407us 00:44:48.422 99.90000% : 35746.909us 00:44:48.422 99.99000% : 36223.535us 00:44:48.422 99.99900% : 36223.535us 00:44:48.422 99.99990% : 36223.535us 00:44:48.422 99.99999% : 36223.535us 00:44:48.422 00:44:48.422 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:44:48.422 ============================================================================== 00:44:48.422 Range in us Cumulative IO count 00:44:48.422 8281.367 - 8340.945: 0.0245% ( 3) 00:44:48.422 8340.945 - 8400.524: 0.0982% ( 9) 00:44:48.422 8400.524 - 8460.102: 0.2209% ( 15) 00:44:48.422 8460.102 - 8519.680: 0.3518% ( 16) 00:44:48.422 8519.680 - 8579.258: 0.5563% ( 25) 00:44:48.422 8579.258 - 8638.836: 0.7853% ( 28) 00:44:48.422 8638.836 - 8698.415: 1.2516% ( 57) 00:44:48.422 8698.415 - 8757.993: 2.1515% ( 110) 00:44:48.422 8757.993 - 8817.571: 3.1823% ( 126) 00:44:48.422 8817.571 - 8877.149: 4.3766% ( 146) 00:44:48.422 8877.149 - 8936.727: 5.7019% ( 162) 00:44:48.422 8936.727 - 8996.305: 7.3217% ( 198) 00:44:48.422 8996.305 - 9055.884: 9.0314% ( 209) 00:44:48.422 9055.884 - 9115.462: 10.7248% ( 207) 00:44:48.422 9115.462 - 9175.040: 12.6063% ( 230) 00:44:48.422 9175.040 - 9234.618: 14.5370% ( 236) 00:44:48.422 9234.618 - 9294.196: 16.4676% ( 236) 00:44:48.422 9294.196 - 9353.775: 18.6355% ( 265) 00:44:48.422 9353.775 - 9413.353: 20.8279% ( 268) 00:44:48.422 9413.353 - 9472.931: 23.1266% ( 281) 00:44:48.422 9472.931 - 9532.509: 25.4745% ( 287) 00:44:48.422 9532.509 - 9592.087: 28.0105% ( 310) 00:44:48.422 9592.087 - 9651.665: 30.6283% ( 320) 00:44:48.422 9651.665 - 9711.244: 33.4506% ( 345) 00:44:48.422 9711.244 - 9770.822: 36.3547% ( 355) 00:44:48.422 9770.822 - 9830.400: 39.3079% ( 361) 00:44:48.422 9830.400 - 9889.978: 42.2120% ( 355) 00:44:48.422 9889.978 - 9949.556: 44.8707% ( 325) 00:44:48.422 9949.556 - 10009.135: 47.5376% ( 326) 00:44:48.422 10009.135 - 10068.713: 49.8937% ( 288) 00:44:48.422 10068.713 - 10128.291: 52.1842% ( 280) 00:44:48.422 10128.291 - 10187.869: 54.4666% ( 279) 
00:44:48.422 10187.869 - 10247.447: 56.4054% ( 237) 00:44:48.422 10247.447 - 10307.025: 58.5488% ( 262) 00:44:48.422 10307.025 - 10366.604: 60.6430% ( 256) 00:44:48.422 10366.604 - 10426.182: 62.7209% ( 254) 00:44:48.422 10426.182 - 10485.760: 64.7251% ( 245) 00:44:48.422 10485.760 - 10545.338: 66.6885% ( 240) 00:44:48.423 10545.338 - 10604.916: 68.6600% ( 241) 00:44:48.423 10604.916 - 10664.495: 70.5088% ( 226) 00:44:48.423 10664.495 - 10724.073: 72.3904% ( 230) 00:44:48.423 10724.073 - 10783.651: 74.4437% ( 251) 00:44:48.423 10783.651 - 10843.229: 76.3334% ( 231) 00:44:48.423 10843.229 - 10902.807: 78.2232% ( 231) 00:44:48.423 10902.807 - 10962.385: 79.8184% ( 195) 00:44:48.423 10962.385 - 11021.964: 81.6263% ( 221) 00:44:48.423 11021.964 - 11081.542: 83.2379% ( 197) 00:44:48.423 11081.542 - 11141.120: 84.6204% ( 169) 00:44:48.423 11141.120 - 11200.698: 85.9784% ( 166) 00:44:48.423 11200.698 - 11260.276: 87.1891% ( 148) 00:44:48.423 11260.276 - 11319.855: 88.2608% ( 131) 00:44:48.423 11319.855 - 11379.433: 89.2098% ( 116) 00:44:48.423 11379.433 - 11439.011: 90.1342% ( 113) 00:44:48.423 11439.011 - 11498.589: 90.8541% ( 88) 00:44:48.423 11498.589 - 11558.167: 91.8439% ( 121) 00:44:48.423 11558.167 - 11617.745: 92.4656% ( 76) 00:44:48.423 11617.745 - 11677.324: 93.2673% ( 98) 00:44:48.423 11677.324 - 11736.902: 93.9300% ( 81) 00:44:48.423 11736.902 - 11796.480: 94.6662% ( 90) 00:44:48.423 11796.480 - 11856.058: 95.3125% ( 79) 00:44:48.423 11856.058 - 11915.636: 95.8197% ( 62) 00:44:48.423 11915.636 - 11975.215: 96.2696% ( 55) 00:44:48.423 11975.215 - 12034.793: 96.6541% ( 47) 00:44:48.423 12034.793 - 12094.371: 96.9732% ( 39) 00:44:48.423 12094.371 - 12153.949: 97.2922% ( 39) 00:44:48.423 12153.949 - 12213.527: 97.4722% ( 22) 00:44:48.423 12213.527 - 12273.105: 97.6440% ( 21) 00:44:48.423 12273.105 - 12332.684: 97.8649% ( 27) 00:44:48.423 12332.684 - 12392.262: 97.9467% ( 10) 00:44:48.423 12392.262 - 12451.840: 98.1266% ( 22) 00:44:48.423 12451.840 - 12511.418: 98.2084% ( 10) 00:44:48.423 12511.418 - 12570.996: 98.3230% ( 14) 00:44:48.423 12570.996 - 12630.575: 98.4048% ( 10) 00:44:48.423 12630.575 - 12690.153: 98.4375% ( 4) 00:44:48.423 12690.153 - 12749.731: 98.4784% ( 5) 00:44:48.423 12749.731 - 12809.309: 98.5029% ( 3) 00:44:48.423 12809.309 - 12868.887: 98.5275% ( 3) 00:44:48.423 12868.887 - 12928.465: 98.5602% ( 4) 00:44:48.423 12928.465 - 12988.044: 98.6011% ( 5) 00:44:48.423 12988.044 - 13047.622: 98.6338% ( 4) 00:44:48.423 13047.622 - 13107.200: 98.6502% ( 2) 00:44:48.423 13107.200 - 13166.778: 98.6584% ( 1) 00:44:48.423 13166.778 - 13226.356: 98.6747% ( 2) 00:44:48.423 13226.356 - 13285.935: 98.6829% ( 1) 00:44:48.423 13285.935 - 13345.513: 98.6993% ( 2) 00:44:48.423 13345.513 - 13405.091: 98.7156% ( 2) 00:44:48.423 13464.669 - 13524.247: 98.7484% ( 4) 00:44:48.423 13583.825 - 13643.404: 98.7647% ( 2) 00:44:48.423 13643.404 - 13702.982: 98.7729% ( 1) 00:44:48.423 13702.982 - 13762.560: 98.7974% ( 3) 00:44:48.423 13762.560 - 13822.138: 98.8056% ( 1) 00:44:48.423 13822.138 - 13881.716: 98.8138% ( 1) 00:44:48.423 13881.716 - 13941.295: 98.8302% ( 2) 00:44:48.423 13941.295 - 14000.873: 98.8384% ( 1) 00:44:48.423 14000.873 - 14060.451: 98.8547% ( 2) 00:44:48.423 14060.451 - 14120.029: 98.8711% ( 2) 00:44:48.423 14120.029 - 14179.607: 98.8793% ( 1) 00:44:48.423 14179.607 - 14239.185: 98.8956% ( 2) 00:44:48.423 14239.185 - 14298.764: 98.9120% ( 2) 00:44:48.423 14298.764 - 14358.342: 98.9283% ( 2) 00:44:48.423 14417.920 - 14477.498: 98.9529% ( 3) 00:44:48.423 39083.287 - 39321.600: 
98.9692% ( 2) 00:44:48.423 39321.600 - 39559.913: 99.0020% ( 4) 00:44:48.423 39559.913 - 39798.225: 99.0347% ( 4) 00:44:48.423 39798.225 - 40036.538: 99.0756% ( 5) 00:44:48.423 40036.538 - 40274.851: 99.1083% ( 4) 00:44:48.423 40274.851 - 40513.164: 99.1492% ( 5) 00:44:48.423 40513.164 - 40751.476: 99.1819% ( 4) 00:44:48.423 40751.476 - 40989.789: 99.2310% ( 6) 00:44:48.423 40989.789 - 41228.102: 99.2637% ( 4) 00:44:48.423 41228.102 - 41466.415: 99.2965% ( 4) 00:44:48.423 41466.415 - 41704.727: 99.3455% ( 6) 00:44:48.423 41704.727 - 41943.040: 99.3701% ( 3) 00:44:48.423 41943.040 - 42181.353: 99.4028% ( 4) 00:44:48.423 42181.353 - 42419.665: 99.4519% ( 6) 00:44:48.423 42419.665 - 42657.978: 99.4764% ( 3) 00:44:48.423 49092.422 - 49330.735: 99.5092% ( 4) 00:44:48.423 49330.735 - 49569.047: 99.5501% ( 5) 00:44:48.423 49569.047 - 49807.360: 99.6073% ( 7) 00:44:48.423 49807.360 - 50045.673: 99.6401% ( 4) 00:44:48.423 50045.673 - 50283.985: 99.6973% ( 7) 00:44:48.423 50283.985 - 50522.298: 99.7464% ( 6) 00:44:48.423 50522.298 - 50760.611: 99.7955% ( 6) 00:44:48.423 50760.611 - 50998.924: 99.8364% ( 5) 00:44:48.423 50998.924 - 51237.236: 99.8773% ( 5) 00:44:48.423 51237.236 - 51475.549: 99.9346% ( 7) 00:44:48.423 51475.549 - 51713.862: 99.9836% ( 6) 00:44:48.423 51713.862 - 51952.175: 100.0000% ( 2) 00:44:48.423 00:44:48.423 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:44:48.423 ============================================================================== 00:44:48.423 Range in us Cumulative IO count 00:44:48.423 8460.102 - 8519.680: 0.0736% ( 9) 00:44:48.423 8519.680 - 8579.258: 0.2618% ( 23) 00:44:48.423 8579.258 - 8638.836: 0.4827% ( 27) 00:44:48.423 8638.836 - 8698.415: 0.8017% ( 39) 00:44:48.423 8698.415 - 8757.993: 1.2026% ( 49) 00:44:48.423 8757.993 - 8817.571: 1.7916% ( 72) 00:44:48.423 8817.571 - 8877.149: 2.7405% ( 116) 00:44:48.423 8877.149 - 8936.727: 4.0494% ( 160) 00:44:48.423 8936.727 - 8996.305: 5.5301% ( 181) 00:44:48.423 8996.305 - 9055.884: 7.3707% ( 225) 00:44:48.423 9055.884 - 9115.462: 9.3586% ( 243) 00:44:48.423 9115.462 - 9175.040: 11.4856% ( 260) 00:44:48.423 9175.040 - 9234.618: 13.5553% ( 253) 00:44:48.423 9234.618 - 9294.196: 15.9686% ( 295) 00:44:48.423 9294.196 - 9353.775: 18.3573% ( 292) 00:44:48.423 9353.775 - 9413.353: 20.6970% ( 286) 00:44:48.423 9413.353 - 9472.931: 23.1757% ( 303) 00:44:48.423 9472.931 - 9532.509: 25.5972% ( 296) 00:44:48.423 9532.509 - 9592.087: 28.1577% ( 313) 00:44:48.423 9592.087 - 9651.665: 30.8410% ( 328) 00:44:48.423 9651.665 - 9711.244: 33.5897% ( 336) 00:44:48.423 9711.244 - 9770.822: 36.3465% ( 337) 00:44:48.423 9770.822 - 9830.400: 39.1198% ( 339) 00:44:48.423 9830.400 - 9889.978: 41.7539% ( 322) 00:44:48.423 9889.978 - 9949.556: 44.2981% ( 311) 00:44:48.423 9949.556 - 10009.135: 46.7932% ( 305) 00:44:48.423 10009.135 - 10068.713: 49.1901% ( 293) 00:44:48.423 10068.713 - 10128.291: 51.4562% ( 277) 00:44:48.423 10128.291 - 10187.869: 53.8367% ( 291) 00:44:48.423 10187.869 - 10247.447: 55.9800% ( 262) 00:44:48.423 10247.447 - 10307.025: 58.1397% ( 264) 00:44:48.423 10307.025 - 10366.604: 60.3076% ( 265) 00:44:48.423 10366.604 - 10426.182: 62.5082% ( 269) 00:44:48.423 10426.182 - 10485.760: 64.5861% ( 254) 00:44:48.423 10485.760 - 10545.338: 66.5985% ( 246) 00:44:48.423 10545.338 - 10604.916: 68.6518% ( 251) 00:44:48.423 10604.916 - 10664.495: 70.7134% ( 252) 00:44:48.423 10664.495 - 10724.073: 72.7912% ( 254) 00:44:48.423 10724.073 - 10783.651: 74.8118% ( 247) 00:44:48.423 10783.651 - 10843.229: 76.8734% ( 252) 
00:44:48.423 10843.229 - 10902.807: 78.8776% ( 245) 00:44:48.423 10902.807 - 10962.385: 80.6692% ( 219) 00:44:48.423 10962.385 - 11021.964: 82.3707% ( 208) 00:44:48.423 11021.964 - 11081.542: 83.9414% ( 192) 00:44:48.423 11081.542 - 11141.120: 85.2994% ( 166) 00:44:48.423 11141.120 - 11200.698: 86.6083% ( 160) 00:44:48.423 11200.698 - 11260.276: 87.6718% ( 130) 00:44:48.423 11260.276 - 11319.855: 88.7189% ( 128) 00:44:48.423 11319.855 - 11379.433: 89.7660% ( 128) 00:44:48.423 11379.433 - 11439.011: 90.8132% ( 128) 00:44:48.423 11439.011 - 11498.589: 91.7294% ( 112) 00:44:48.423 11498.589 - 11558.167: 92.5802% ( 104) 00:44:48.423 11558.167 - 11617.745: 93.3655% ( 96) 00:44:48.423 11617.745 - 11677.324: 94.1509% ( 96) 00:44:48.423 11677.324 - 11736.902: 94.9035% ( 92) 00:44:48.423 11736.902 - 11796.480: 95.4761% ( 70) 00:44:48.423 11796.480 - 11856.058: 95.9424% ( 57) 00:44:48.423 11856.058 - 11915.636: 96.3351% ( 48) 00:44:48.423 11915.636 - 11975.215: 96.6868% ( 43) 00:44:48.423 11975.215 - 12034.793: 97.0141% ( 40) 00:44:48.423 12034.793 - 12094.371: 97.3413% ( 40) 00:44:48.423 12094.371 - 12153.949: 97.6276% ( 35) 00:44:48.423 12153.949 - 12213.527: 97.8485% ( 27) 00:44:48.423 12213.527 - 12273.105: 98.0530% ( 25) 00:44:48.423 12273.105 - 12332.684: 98.1921% ( 17) 00:44:48.423 12332.684 - 12392.262: 98.3066% ( 14) 00:44:48.423 12392.262 - 12451.840: 98.3802% ( 9) 00:44:48.423 12451.840 - 12511.418: 98.4539% ( 9) 00:44:48.423 12511.418 - 12570.996: 98.5193% ( 8) 00:44:48.423 12570.996 - 12630.575: 98.5766% ( 7) 00:44:48.423 12630.575 - 12690.153: 98.6175% ( 5) 00:44:48.423 12690.153 - 12749.731: 98.6502% ( 4) 00:44:48.423 12749.731 - 12809.309: 98.6911% ( 5) 00:44:48.423 12809.309 - 12868.887: 98.7320% ( 5) 00:44:48.423 12868.887 - 12928.465: 98.7484% ( 2) 00:44:48.423 12928.465 - 12988.044: 98.7565% ( 1) 00:44:48.423 12988.044 - 13047.622: 98.7729% ( 2) 00:44:48.423 13047.622 - 13107.200: 98.7893% ( 2) 00:44:48.423 13107.200 - 13166.778: 98.8056% ( 2) 00:44:48.423 13166.778 - 13226.356: 98.8220% ( 2) 00:44:48.423 13226.356 - 13285.935: 98.8384% ( 2) 00:44:48.423 13285.935 - 13345.513: 98.8547% ( 2) 00:44:48.423 13345.513 - 13405.091: 98.8629% ( 1) 00:44:48.423 13405.091 - 13464.669: 98.8793% ( 2) 00:44:48.423 13464.669 - 13524.247: 98.8874% ( 1) 00:44:48.423 13524.247 - 13583.825: 98.9038% ( 2) 00:44:48.423 13583.825 - 13643.404: 98.9202% ( 2) 00:44:48.423 13643.404 - 13702.982: 98.9365% ( 2) 00:44:48.423 13702.982 - 13762.560: 98.9447% ( 1) 00:44:48.423 13762.560 - 13822.138: 98.9529% ( 1) 00:44:48.423 36938.473 - 37176.785: 98.9856% ( 4) 00:44:48.423 37176.785 - 37415.098: 99.0265% ( 5) 00:44:48.423 37415.098 - 37653.411: 99.0592% ( 4) 00:44:48.423 37653.411 - 37891.724: 99.1001% ( 5) 00:44:48.423 37891.724 - 38130.036: 99.1329% ( 4) 00:44:48.423 38130.036 - 38368.349: 99.1738% ( 5) 00:44:48.423 38368.349 - 38606.662: 99.2065% ( 4) 00:44:48.423 38606.662 - 38844.975: 99.2474% ( 5) 00:44:48.423 38844.975 - 39083.287: 99.2801% ( 4) 00:44:48.423 39083.287 - 39321.600: 99.3128% ( 4) 00:44:48.423 39321.600 - 39559.913: 99.3455% ( 4) 00:44:48.424 39559.913 - 39798.225: 99.3865% ( 5) 00:44:48.424 39798.225 - 40036.538: 99.4274% ( 5) 00:44:48.424 40036.538 - 40274.851: 99.4683% ( 5) 00:44:48.424 40274.851 - 40513.164: 99.4764% ( 1) 00:44:48.424 45994.356 - 46232.669: 99.5010% ( 3) 00:44:48.424 46232.669 - 46470.982: 99.5419% ( 5) 00:44:48.424 46470.982 - 46709.295: 99.5991% ( 7) 00:44:48.424 46709.295 - 46947.607: 99.6401% ( 5) 00:44:48.424 46947.607 - 47185.920: 99.6973% ( 7) 00:44:48.424 
47185.920 - 47424.233: 99.7464% ( 6) 00:44:48.424 47424.233 - 47662.545: 99.7873% ( 5) 00:44:48.424 47662.545 - 47900.858: 99.8200% ( 4) 00:44:48.424 47900.858 - 48139.171: 99.8691% ( 6) 00:44:48.424 48139.171 - 48377.484: 99.9100% ( 5) 00:44:48.424 48377.484 - 48615.796: 99.9591% ( 6) 00:44:48.424 48615.796 - 48854.109: 100.0000% ( 5) 00:44:48.424 00:44:48.424 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:44:48.424 ============================================================================== 00:44:48.424 Range in us Cumulative IO count 00:44:48.424 8400.524 - 8460.102: 0.0164% ( 2) 00:44:48.424 8460.102 - 8519.680: 0.1063% ( 11) 00:44:48.424 8519.680 - 8579.258: 0.2781% ( 21) 00:44:48.424 8579.258 - 8638.836: 0.5072% ( 28) 00:44:48.424 8638.836 - 8698.415: 0.8344% ( 40) 00:44:48.424 8698.415 - 8757.993: 1.3580% ( 64) 00:44:48.424 8757.993 - 8817.571: 2.0288% ( 82) 00:44:48.424 8817.571 - 8877.149: 3.0023% ( 119) 00:44:48.424 8877.149 - 8936.727: 4.2294% ( 150) 00:44:48.424 8936.727 - 8996.305: 5.6446% ( 173) 00:44:48.424 8996.305 - 9055.884: 7.2971% ( 202) 00:44:48.424 9055.884 - 9115.462: 9.1050% ( 221) 00:44:48.424 9115.462 - 9175.040: 11.2484% ( 262) 00:44:48.424 9175.040 - 9234.618: 13.5308% ( 279) 00:44:48.424 9234.618 - 9294.196: 15.8541% ( 284) 00:44:48.424 9294.196 - 9353.775: 18.2101% ( 288) 00:44:48.424 9353.775 - 9413.353: 20.7624% ( 312) 00:44:48.424 9413.353 - 9472.931: 23.2739% ( 307) 00:44:48.424 9472.931 - 9532.509: 25.9817% ( 331) 00:44:48.424 9532.509 - 9592.087: 28.6322% ( 324) 00:44:48.424 9592.087 - 9651.665: 31.2909% ( 325) 00:44:48.424 9651.665 - 9711.244: 34.0151% ( 333) 00:44:48.424 9711.244 - 9770.822: 36.7228% ( 331) 00:44:48.424 9770.822 - 9830.400: 39.3243% ( 318) 00:44:48.424 9830.400 - 9889.978: 41.9421% ( 320) 00:44:48.424 9889.978 - 9949.556: 44.4781% ( 310) 00:44:48.424 9949.556 - 10009.135: 46.9404% ( 301) 00:44:48.424 10009.135 - 10068.713: 49.4110% ( 302) 00:44:48.424 10068.713 - 10128.291: 51.8488% ( 298) 00:44:48.424 10128.291 - 10187.869: 54.1639% ( 283) 00:44:48.424 10187.869 - 10247.447: 56.3809% ( 271) 00:44:48.424 10247.447 - 10307.025: 58.4833% ( 257) 00:44:48.424 10307.025 - 10366.604: 60.5203% ( 249) 00:44:48.424 10366.604 - 10426.182: 62.5736% ( 251) 00:44:48.424 10426.182 - 10485.760: 64.7170% ( 262) 00:44:48.424 10485.760 - 10545.338: 66.6803% ( 240) 00:44:48.424 10545.338 - 10604.916: 68.6273% ( 238) 00:44:48.424 10604.916 - 10664.495: 70.6561% ( 248) 00:44:48.424 10664.495 - 10724.073: 72.6767% ( 247) 00:44:48.424 10724.073 - 10783.651: 74.6973% ( 247) 00:44:48.424 10783.651 - 10843.229: 76.8488% ( 263) 00:44:48.424 10843.229 - 10902.807: 78.6895% ( 225) 00:44:48.424 10902.807 - 10962.385: 80.5383% ( 226) 00:44:48.424 10962.385 - 11021.964: 82.2317% ( 207) 00:44:48.424 11021.964 - 11081.542: 83.7615% ( 187) 00:44:48.424 11081.542 - 11141.120: 85.1849% ( 174) 00:44:48.424 11141.120 - 11200.698: 86.5347% ( 165) 00:44:48.424 11200.698 - 11260.276: 87.7536% ( 149) 00:44:48.424 11260.276 - 11319.855: 88.9071% ( 141) 00:44:48.424 11319.855 - 11379.433: 90.0115% ( 135) 00:44:48.424 11379.433 - 11439.011: 91.0586% ( 128) 00:44:48.424 11439.011 - 11498.589: 91.9912% ( 114) 00:44:48.424 11498.589 - 11558.167: 92.9483% ( 117) 00:44:48.424 11558.167 - 11617.745: 93.7173% ( 94) 00:44:48.424 11617.745 - 11677.324: 94.3554% ( 78) 00:44:48.424 11677.324 - 11736.902: 95.0180% ( 81) 00:44:48.424 11736.902 - 11796.480: 95.6234% ( 74) 00:44:48.424 11796.480 - 11856.058: 96.0978% ( 58) 00:44:48.424 11856.058 - 11915.636: 96.4823% ( 47) 
00:44:48.424 11915.636 - 11975.215: 96.8423% ( 44) 00:44:48.424 11975.215 - 12034.793: 97.1613% ( 39) 00:44:48.424 12034.793 - 12094.371: 97.4476% ( 35) 00:44:48.424 12094.371 - 12153.949: 97.7012% ( 31) 00:44:48.424 12153.949 - 12213.527: 97.9139% ( 26) 00:44:48.424 12213.527 - 12273.105: 98.1266% ( 26) 00:44:48.424 12273.105 - 12332.684: 98.3066% ( 22) 00:44:48.424 12332.684 - 12392.262: 98.4457% ( 17) 00:44:48.424 12392.262 - 12451.840: 98.5602% ( 14) 00:44:48.424 12451.840 - 12511.418: 98.6584% ( 12) 00:44:48.424 12511.418 - 12570.996: 98.7156% ( 7) 00:44:48.424 12570.996 - 12630.575: 98.7402% ( 3) 00:44:48.424 12630.575 - 12690.153: 98.7565% ( 2) 00:44:48.424 12690.153 - 12749.731: 98.7729% ( 2) 00:44:48.424 12749.731 - 12809.309: 98.7893% ( 2) 00:44:48.424 12809.309 - 12868.887: 98.8056% ( 2) 00:44:48.424 12868.887 - 12928.465: 98.8138% ( 1) 00:44:48.424 12928.465 - 12988.044: 98.8302% ( 2) 00:44:48.424 12988.044 - 13047.622: 98.8465% ( 2) 00:44:48.424 13047.622 - 13107.200: 98.8629% ( 2) 00:44:48.424 13107.200 - 13166.778: 98.8793% ( 2) 00:44:48.424 13166.778 - 13226.356: 98.8874% ( 1) 00:44:48.424 13226.356 - 13285.935: 98.9038% ( 2) 00:44:48.424 13285.935 - 13345.513: 98.9202% ( 2) 00:44:48.424 13345.513 - 13405.091: 98.9365% ( 2) 00:44:48.424 13405.091 - 13464.669: 98.9529% ( 2) 00:44:48.424 34555.345 - 34793.658: 98.9938% ( 5) 00:44:48.424 34793.658 - 35031.971: 99.0265% ( 4) 00:44:48.424 35031.971 - 35270.284: 99.0592% ( 4) 00:44:48.424 35270.284 - 35508.596: 99.1001% ( 5) 00:44:48.424 35508.596 - 35746.909: 99.1329% ( 4) 00:44:48.424 35746.909 - 35985.222: 99.1656% ( 4) 00:44:48.424 35985.222 - 36223.535: 99.2065% ( 5) 00:44:48.424 36223.535 - 36461.847: 99.2392% ( 4) 00:44:48.424 36461.847 - 36700.160: 99.2801% ( 5) 00:44:48.424 36700.160 - 36938.473: 99.3210% ( 5) 00:44:48.424 36938.473 - 37176.785: 99.3537% ( 4) 00:44:48.424 37176.785 - 37415.098: 99.3946% ( 5) 00:44:48.424 37415.098 - 37653.411: 99.4274% ( 4) 00:44:48.424 37653.411 - 37891.724: 99.4683% ( 5) 00:44:48.424 37891.724 - 38130.036: 99.4764% ( 1) 00:44:48.424 43372.916 - 43611.229: 99.4846% ( 1) 00:44:48.424 43611.229 - 43849.542: 99.5255% ( 5) 00:44:48.424 43849.542 - 44087.855: 99.5746% ( 6) 00:44:48.424 44087.855 - 44326.167: 99.6155% ( 5) 00:44:48.424 44326.167 - 44564.480: 99.6646% ( 6) 00:44:48.424 44564.480 - 44802.793: 99.7219% ( 7) 00:44:48.424 44802.793 - 45041.105: 99.7628% ( 5) 00:44:48.424 45041.105 - 45279.418: 99.8118% ( 6) 00:44:48.424 45279.418 - 45517.731: 99.8609% ( 6) 00:44:48.424 45517.731 - 45756.044: 99.9100% ( 6) 00:44:48.424 45756.044 - 45994.356: 99.9591% ( 6) 00:44:48.424 45994.356 - 46232.669: 100.0000% ( 5) 00:44:48.424 00:44:48.424 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:44:48.424 ============================================================================== 00:44:48.424 Range in us Cumulative IO count 00:44:48.424 8340.945 - 8400.524: 0.0164% ( 2) 00:44:48.424 8400.524 - 8460.102: 0.0491% ( 4) 00:44:48.424 8460.102 - 8519.680: 0.0982% ( 6) 00:44:48.424 8519.680 - 8579.258: 0.2536% ( 19) 00:44:48.424 8579.258 - 8638.836: 0.4581% ( 25) 00:44:48.424 8638.836 - 8698.415: 0.7690% ( 38) 00:44:48.424 8698.415 - 8757.993: 1.1453% ( 46) 00:44:48.424 8757.993 - 8817.571: 1.9306% ( 96) 00:44:48.424 8817.571 - 8877.149: 2.7569% ( 101) 00:44:48.424 8877.149 - 8936.727: 3.9431% ( 145) 00:44:48.424 8936.727 - 8996.305: 5.3092% ( 167) 00:44:48.424 8996.305 - 9055.884: 7.0108% ( 208) 00:44:48.424 9055.884 - 9115.462: 8.9005% ( 231) 00:44:48.424 9115.462 - 9175.040: 11.0438% 
( 262) 00:44:48.424 9175.040 - 9234.618: 13.1872% ( 262) 00:44:48.424 9234.618 - 9294.196: 15.4941% ( 282) 00:44:48.424 9294.196 - 9353.775: 17.9074% ( 295) 00:44:48.424 9353.775 - 9413.353: 20.4434% ( 310) 00:44:48.424 9413.353 - 9472.931: 23.1921% ( 336) 00:44:48.424 9472.931 - 9532.509: 25.9490% ( 337) 00:44:48.424 9532.509 - 9592.087: 28.6404% ( 329) 00:44:48.424 9592.087 - 9651.665: 31.3563% ( 332) 00:44:48.424 9651.665 - 9711.244: 34.0560% ( 330) 00:44:48.424 9711.244 - 9770.822: 36.6738% ( 320) 00:44:48.424 9770.822 - 9830.400: 39.2752% ( 318) 00:44:48.424 9830.400 - 9889.978: 41.8930% ( 320) 00:44:48.424 9889.978 - 9949.556: 44.5272% ( 322) 00:44:48.424 9949.556 - 10009.135: 47.0877% ( 313) 00:44:48.424 10009.135 - 10068.713: 49.5173% ( 297) 00:44:48.424 10068.713 - 10128.291: 51.9961% ( 303) 00:44:48.424 10128.291 - 10187.869: 54.1885% ( 268) 00:44:48.424 10187.869 - 10247.447: 56.3973% ( 270) 00:44:48.424 10247.447 - 10307.025: 58.5406% ( 262) 00:44:48.424 10307.025 - 10366.604: 60.7248% ( 267) 00:44:48.424 10366.604 - 10426.182: 62.6636% ( 237) 00:44:48.424 10426.182 - 10485.760: 64.5942% ( 236) 00:44:48.424 10485.760 - 10545.338: 66.6394% ( 250) 00:44:48.424 10545.338 - 10604.916: 68.6109% ( 241) 00:44:48.424 10604.916 - 10664.495: 70.6561% ( 250) 00:44:48.424 10664.495 - 10724.073: 72.7667% ( 258) 00:44:48.424 10724.073 - 10783.651: 74.8446% ( 254) 00:44:48.424 10783.651 - 10843.229: 76.9388% ( 256) 00:44:48.424 10843.229 - 10902.807: 78.9103% ( 241) 00:44:48.424 10902.807 - 10962.385: 80.8164% ( 233) 00:44:48.425 10962.385 - 11021.964: 82.5425% ( 211) 00:44:48.425 11021.964 - 11081.542: 84.1705% ( 199) 00:44:48.425 11081.542 - 11141.120: 85.4794% ( 160) 00:44:48.425 11141.120 - 11200.698: 86.7310% ( 153) 00:44:48.425 11200.698 - 11260.276: 87.9172% ( 145) 00:44:48.425 11260.276 - 11319.855: 89.1361% ( 149) 00:44:48.425 11319.855 - 11379.433: 90.2241% ( 133) 00:44:48.425 11379.433 - 11439.011: 91.2958% ( 131) 00:44:48.425 11439.011 - 11498.589: 92.1711% ( 107) 00:44:48.425 11498.589 - 11558.167: 93.0219% ( 104) 00:44:48.425 11558.167 - 11617.745: 93.7500% ( 89) 00:44:48.425 11617.745 - 11677.324: 94.3963% ( 79) 00:44:48.425 11677.324 - 11736.902: 95.0262% ( 77) 00:44:48.425 11736.902 - 11796.480: 95.6152% ( 72) 00:44:48.425 11796.480 - 11856.058: 96.1715% ( 68) 00:44:48.425 11856.058 - 11915.636: 96.6050% ( 53) 00:44:48.425 11915.636 - 11975.215: 96.9977% ( 48) 00:44:48.425 11975.215 - 12034.793: 97.3495% ( 43) 00:44:48.425 12034.793 - 12094.371: 97.6522% ( 37) 00:44:48.425 12094.371 - 12153.949: 97.9303% ( 34) 00:44:48.425 12153.949 - 12213.527: 98.1839% ( 31) 00:44:48.425 12213.527 - 12273.105: 98.3475% ( 20) 00:44:48.425 12273.105 - 12332.684: 98.5357% ( 23) 00:44:48.425 12332.684 - 12392.262: 98.6829% ( 18) 00:44:48.425 12392.262 - 12451.840: 98.7893% ( 13) 00:44:48.425 12451.840 - 12511.418: 98.8629% ( 9) 00:44:48.425 12511.418 - 12570.996: 98.9038% ( 5) 00:44:48.425 12570.996 - 12630.575: 98.9447% ( 5) 00:44:48.425 12630.575 - 12690.153: 98.9529% ( 1) 00:44:48.425 31933.905 - 32172.218: 98.9774% ( 3) 00:44:48.425 32172.218 - 32410.531: 99.0101% ( 4) 00:44:48.425 32410.531 - 32648.844: 99.0429% ( 4) 00:44:48.425 32648.844 - 32887.156: 99.0838% ( 5) 00:44:48.425 32887.156 - 33125.469: 99.1247% ( 5) 00:44:48.425 33125.469 - 33363.782: 99.1574% ( 4) 00:44:48.425 33363.782 - 33602.095: 99.1983% ( 5) 00:44:48.425 33602.095 - 33840.407: 99.2392% ( 5) 00:44:48.425 33840.407 - 34078.720: 99.2719% ( 4) 00:44:48.425 34078.720 - 34317.033: 99.3046% ( 4) 00:44:48.425 34317.033 - 
34555.345: 99.3374% ( 4) 00:44:48.425 34555.345 - 34793.658: 99.3783% ( 5) 00:44:48.425 34793.658 - 35031.971: 99.4192% ( 5) 00:44:48.425 35031.971 - 35270.284: 99.4519% ( 4) 00:44:48.425 35270.284 - 35508.596: 99.4764% ( 3) 00:44:48.425 40274.851 - 40513.164: 99.4928% ( 2) 00:44:48.425 40513.164 - 40751.476: 99.5337% ( 5) 00:44:48.425 40751.476 - 40989.789: 99.5828% ( 6) 00:44:48.425 40989.789 - 41228.102: 99.6237% ( 5) 00:44:48.425 41228.102 - 41466.415: 99.6728% ( 6) 00:44:48.425 41466.415 - 41704.727: 99.7219% ( 6) 00:44:48.425 41704.727 - 41943.040: 99.7628% ( 5) 00:44:48.425 41943.040 - 42181.353: 99.8200% ( 7) 00:44:48.425 42181.353 - 42419.665: 99.8691% ( 6) 00:44:48.425 42419.665 - 42657.978: 99.9100% ( 5) 00:44:48.425 42657.978 - 42896.291: 99.9591% ( 6) 00:44:48.425 42896.291 - 43134.604: 100.0000% ( 5) 00:44:48.425 00:44:48.425 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:44:48.425 ============================================================================== 00:44:48.425 Range in us Cumulative IO count 00:44:48.425 8340.945 - 8400.524: 0.0164% ( 2) 00:44:48.425 8400.524 - 8460.102: 0.0654% ( 6) 00:44:48.425 8460.102 - 8519.680: 0.1063% ( 5) 00:44:48.425 8519.680 - 8579.258: 0.2291% ( 15) 00:44:48.425 8579.258 - 8638.836: 0.4254% ( 24) 00:44:48.425 8638.836 - 8698.415: 0.7035% ( 34) 00:44:48.425 8698.415 - 8757.993: 1.1616% ( 56) 00:44:48.425 8757.993 - 8817.571: 1.7916% ( 77) 00:44:48.425 8817.571 - 8877.149: 2.7569% ( 118) 00:44:48.425 8877.149 - 8936.727: 3.9431% ( 145) 00:44:48.425 8936.727 - 8996.305: 5.3665% ( 174) 00:44:48.425 8996.305 - 9055.884: 6.9617% ( 195) 00:44:48.425 9055.884 - 9115.462: 8.7696% ( 221) 00:44:48.425 9115.462 - 9175.040: 10.7984% ( 248) 00:44:48.425 9175.040 - 9234.618: 12.9336% ( 261) 00:44:48.425 9234.618 - 9294.196: 15.1587% ( 272) 00:44:48.425 9294.196 - 9353.775: 17.6538% ( 305) 00:44:48.425 9353.775 - 9413.353: 20.1652% ( 307) 00:44:48.425 9413.353 - 9472.931: 22.8158% ( 324) 00:44:48.425 9472.931 - 9532.509: 25.5890% ( 339) 00:44:48.425 9532.509 - 9592.087: 28.3868% ( 342) 00:44:48.425 9592.087 - 9651.665: 31.0782% ( 329) 00:44:48.425 9651.665 - 9711.244: 33.7124% ( 322) 00:44:48.425 9711.244 - 9770.822: 36.5101% ( 342) 00:44:48.425 9770.822 - 9830.400: 39.0707% ( 313) 00:44:48.425 9830.400 - 9889.978: 41.6230% ( 312) 00:44:48.425 9889.978 - 9949.556: 44.1836% ( 313) 00:44:48.425 9949.556 - 10009.135: 46.7441% ( 313) 00:44:48.425 10009.135 - 10068.713: 49.2556% ( 307) 00:44:48.425 10068.713 - 10128.291: 51.7261% ( 302) 00:44:48.425 10128.291 - 10187.869: 54.0330% ( 282) 00:44:48.425 10187.869 - 10247.447: 56.3236% ( 280) 00:44:48.425 10247.447 - 10307.025: 58.6060% ( 279) 00:44:48.425 10307.025 - 10366.604: 60.6839% ( 254) 00:44:48.425 10366.604 - 10426.182: 62.8109% ( 260) 00:44:48.425 10426.182 - 10485.760: 64.8969% ( 255) 00:44:48.425 10485.760 - 10545.338: 66.8603% ( 240) 00:44:48.425 10545.338 - 10604.916: 68.7991% ( 237) 00:44:48.425 10604.916 - 10664.495: 70.7624% ( 240) 00:44:48.425 10664.495 - 10724.073: 72.8321% ( 253) 00:44:48.425 10724.073 - 10783.651: 74.8937% ( 252) 00:44:48.425 10783.651 - 10843.229: 77.1024% ( 270) 00:44:48.425 10843.229 - 10902.807: 79.1476% ( 250) 00:44:48.425 10902.807 - 10962.385: 80.9637% ( 222) 00:44:48.425 10962.385 - 11021.964: 82.6571% ( 207) 00:44:48.425 11021.964 - 11081.542: 84.3505% ( 207) 00:44:48.425 11081.542 - 11141.120: 85.8312% ( 181) 00:44:48.425 11141.120 - 11200.698: 87.1728% ( 164) 00:44:48.425 11200.698 - 11260.276: 88.3426% ( 143) 00:44:48.425 11260.276 - 
11319.855: 89.3570% ( 124) 00:44:48.425 11319.855 - 11379.433: 90.4450% ( 133) 00:44:48.425 11379.433 - 11439.011: 91.3776% ( 114) 00:44:48.425 11439.011 - 11498.589: 92.2775% ( 110) 00:44:48.425 11498.589 - 11558.167: 93.1365% ( 105) 00:44:48.425 11558.167 - 11617.745: 93.9136% ( 95) 00:44:48.425 11617.745 - 11677.324: 94.6499% ( 90) 00:44:48.425 11677.324 - 11736.902: 95.3616% ( 87) 00:44:48.425 11736.902 - 11796.480: 95.9751% ( 75) 00:44:48.425 11796.480 - 11856.058: 96.5314% ( 68) 00:44:48.425 11856.058 - 11915.636: 97.0304% ( 61) 00:44:48.425 11915.636 - 11975.215: 97.4967% ( 57) 00:44:48.425 11975.215 - 12034.793: 97.7749% ( 34) 00:44:48.425 12034.793 - 12094.371: 97.9712% ( 24) 00:44:48.425 12094.371 - 12153.949: 98.1430% ( 21) 00:44:48.425 12153.949 - 12213.527: 98.3393% ( 24) 00:44:48.425 12213.527 - 12273.105: 98.5111% ( 21) 00:44:48.425 12273.105 - 12332.684: 98.6257% ( 14) 00:44:48.425 12332.684 - 12392.262: 98.6911% ( 8) 00:44:48.425 12392.262 - 12451.840: 98.7647% ( 9) 00:44:48.425 12451.840 - 12511.418: 98.8220% ( 7) 00:44:48.425 12511.418 - 12570.996: 98.8711% ( 6) 00:44:48.425 12570.996 - 12630.575: 98.9120% ( 5) 00:44:48.425 12630.575 - 12690.153: 98.9365% ( 3) 00:44:48.425 12690.153 - 12749.731: 98.9529% ( 2) 00:44:48.425 29074.153 - 29193.309: 98.9774% ( 3) 00:44:48.425 29193.309 - 29312.465: 98.9938% ( 2) 00:44:48.425 29312.465 - 29431.622: 99.0183% ( 3) 00:44:48.425 29431.622 - 29550.778: 99.0347% ( 2) 00:44:48.425 29550.778 - 29669.935: 99.0592% ( 3) 00:44:48.425 29669.935 - 29789.091: 99.0756% ( 2) 00:44:48.425 29789.091 - 29908.247: 99.1001% ( 3) 00:44:48.425 29908.247 - 30027.404: 99.1165% ( 2) 00:44:48.425 30027.404 - 30146.560: 99.1329% ( 2) 00:44:48.425 30146.560 - 30265.716: 99.1492% ( 2) 00:44:48.425 30265.716 - 30384.873: 99.1656% ( 2) 00:44:48.425 30384.873 - 30504.029: 99.1819% ( 2) 00:44:48.425 30504.029 - 30742.342: 99.2147% ( 4) 00:44:48.425 30742.342 - 30980.655: 99.2392% ( 3) 00:44:48.425 30980.655 - 31218.967: 99.2801% ( 5) 00:44:48.425 31218.967 - 31457.280: 99.3046% ( 3) 00:44:48.425 31457.280 - 31695.593: 99.3455% ( 5) 00:44:48.425 31695.593 - 31933.905: 99.3946% ( 6) 00:44:48.425 31933.905 - 32172.218: 99.4355% ( 5) 00:44:48.425 32172.218 - 32410.531: 99.4764% ( 5) 00:44:48.425 36938.473 - 37176.785: 99.4928% ( 2) 00:44:48.425 37176.785 - 37415.098: 99.5173% ( 3) 00:44:48.425 37415.098 - 37653.411: 99.5664% ( 6) 00:44:48.425 37653.411 - 37891.724: 99.6073% ( 5) 00:44:48.425 37891.724 - 38130.036: 99.6564% ( 6) 00:44:48.425 38130.036 - 38368.349: 99.7137% ( 7) 00:44:48.425 38368.349 - 38606.662: 99.7546% ( 5) 00:44:48.425 38606.662 - 38844.975: 99.8037% ( 6) 00:44:48.425 38844.975 - 39083.287: 99.8527% ( 6) 00:44:48.425 39083.287 - 39321.600: 99.9018% ( 6) 00:44:48.425 39321.600 - 39559.913: 99.9591% ( 7) 00:44:48.425 39559.913 - 39798.225: 100.0000% ( 5) 00:44:48.425 00:44:48.425 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:44:48.425 ============================================================================== 00:44:48.425 Range in us Cumulative IO count 00:44:48.425 8460.102 - 8519.680: 0.1391% ( 17) 00:44:48.425 8519.680 - 8579.258: 0.2863% ( 18) 00:44:48.425 8579.258 - 8638.836: 0.4336% ( 18) 00:44:48.425 8638.836 - 8698.415: 0.7199% ( 35) 00:44:48.425 8698.415 - 8757.993: 1.1289% ( 50) 00:44:48.425 8757.993 - 8817.571: 1.7343% ( 74) 00:44:48.425 8817.571 - 8877.149: 2.6423% ( 111) 00:44:48.425 8877.149 - 8936.727: 4.0249% ( 169) 00:44:48.425 8936.727 - 8996.305: 5.4810% ( 178) 00:44:48.425 8996.305 - 9055.884: 7.1826% ( 
208) 00:44:48.425 9055.884 - 9115.462: 9.0478% ( 228) 00:44:48.425 9115.462 - 9175.040: 11.1093% ( 252) 00:44:48.425 9175.040 - 9234.618: 13.2199% ( 258) 00:44:48.425 9234.618 - 9294.196: 15.3714% ( 263) 00:44:48.425 9294.196 - 9353.775: 17.7438% ( 290) 00:44:48.425 9353.775 - 9413.353: 20.1489% ( 294) 00:44:48.425 9413.353 - 9472.931: 22.7421% ( 317) 00:44:48.425 9472.931 - 9532.509: 25.4499% ( 331) 00:44:48.425 9532.509 - 9592.087: 28.2559% ( 343) 00:44:48.425 9592.087 - 9651.665: 30.9964% ( 335) 00:44:48.426 9651.665 - 9711.244: 33.7287% ( 334) 00:44:48.426 9711.244 - 9770.822: 36.6083% ( 352) 00:44:48.426 9770.822 - 9830.400: 39.2752% ( 326) 00:44:48.426 9830.400 - 9889.978: 41.7212% ( 299) 00:44:48.426 9889.978 - 9949.556: 44.1590% ( 298) 00:44:48.426 9949.556 - 10009.135: 46.7196% ( 313) 00:44:48.426 10009.135 - 10068.713: 49.0756% ( 288) 00:44:48.426 10068.713 - 10128.291: 51.3907% ( 283) 00:44:48.426 10128.291 - 10187.869: 53.7631% ( 290) 00:44:48.426 10187.869 - 10247.447: 56.2336% ( 302) 00:44:48.426 10247.447 - 10307.025: 58.4751% ( 274) 00:44:48.426 10307.025 - 10366.604: 60.6757% ( 269) 00:44:48.426 10366.604 - 10426.182: 62.7454% ( 253) 00:44:48.426 10426.182 - 10485.760: 64.8151% ( 253) 00:44:48.426 10485.760 - 10545.338: 66.8112% ( 244) 00:44:48.426 10545.338 - 10604.916: 68.7255% ( 234) 00:44:48.426 10604.916 - 10664.495: 70.7134% ( 243) 00:44:48.426 10664.495 - 10724.073: 72.7421% ( 248) 00:44:48.426 10724.073 - 10783.651: 74.7300% ( 243) 00:44:48.426 10783.651 - 10843.229: 76.7752% ( 250) 00:44:48.426 10843.229 - 10902.807: 78.7467% ( 241) 00:44:48.426 10902.807 - 10962.385: 80.6692% ( 235) 00:44:48.426 10962.385 - 11021.964: 82.4935% ( 223) 00:44:48.426 11021.964 - 11081.542: 84.1623% ( 204) 00:44:48.426 11081.542 - 11141.120: 85.6839% ( 186) 00:44:48.426 11141.120 - 11200.698: 86.9683% ( 157) 00:44:48.426 11200.698 - 11260.276: 88.0808% ( 136) 00:44:48.426 11260.276 - 11319.855: 89.1688% ( 133) 00:44:48.426 11319.855 - 11379.433: 90.2405% ( 131) 00:44:48.426 11379.433 - 11439.011: 91.1976% ( 117) 00:44:48.426 11439.011 - 11498.589: 92.1630% ( 118) 00:44:48.426 11498.589 - 11558.167: 93.0546% ( 109) 00:44:48.426 11558.167 - 11617.745: 93.8973% ( 103) 00:44:48.426 11617.745 - 11677.324: 94.6990% ( 98) 00:44:48.426 11677.324 - 11736.902: 95.4598% ( 93) 00:44:48.426 11736.902 - 11796.480: 96.0978% ( 78) 00:44:48.426 11796.480 - 11856.058: 96.6623% ( 69) 00:44:48.426 11856.058 - 11915.636: 97.2022% ( 66) 00:44:48.426 11915.636 - 11975.215: 97.6276% ( 52) 00:44:48.426 11975.215 - 12034.793: 97.9139% ( 35) 00:44:48.426 12034.793 - 12094.371: 98.1512% ( 29) 00:44:48.426 12094.371 - 12153.949: 98.3066% ( 19) 00:44:48.426 12153.949 - 12213.527: 98.4539% ( 18) 00:44:48.426 12213.527 - 12273.105: 98.5602% ( 13) 00:44:48.426 12273.105 - 12332.684: 98.6584% ( 12) 00:44:48.426 12332.684 - 12392.262: 98.7320% ( 9) 00:44:48.426 12392.262 - 12451.840: 98.7811% ( 6) 00:44:48.426 12451.840 - 12511.418: 98.8056% ( 3) 00:44:48.426 12511.418 - 12570.996: 98.8220% ( 2) 00:44:48.426 12570.996 - 12630.575: 98.8465% ( 3) 00:44:48.426 12630.575 - 12690.153: 98.8711% ( 3) 00:44:48.426 12690.153 - 12749.731: 98.8956% ( 3) 00:44:48.426 12749.731 - 12809.309: 98.9202% ( 3) 00:44:48.426 12809.309 - 12868.887: 98.9365% ( 2) 00:44:48.426 12868.887 - 12928.465: 98.9529% ( 2) 00:44:48.426 25856.931 - 25976.087: 98.9692% ( 2) 00:44:48.426 25976.087 - 26095.244: 98.9856% ( 2) 00:44:48.426 26095.244 - 26214.400: 99.0101% ( 3) 00:44:48.426 26214.400 - 26333.556: 99.0347% ( 3) 00:44:48.426 26333.556 - 
26452.713: 99.0592% ( 3) 00:44:48.426 26452.713 - 26571.869: 99.0838% ( 3) 00:44:48.426 26571.869 - 26691.025: 99.1083% ( 3) 00:44:48.426 26691.025 - 26810.182: 99.1247% ( 2) 00:44:48.426 26810.182 - 26929.338: 99.1410% ( 2) 00:44:48.426 26929.338 - 27048.495: 99.1574% ( 2) 00:44:48.426 27048.495 - 27167.651: 99.1738% ( 2) 00:44:48.426 27167.651 - 27286.807: 99.1901% ( 2) 00:44:48.426 27286.807 - 27405.964: 99.2147% ( 3) 00:44:48.426 27405.964 - 27525.120: 99.2310% ( 2) 00:44:48.426 27525.120 - 27644.276: 99.2556% ( 3) 00:44:48.426 27644.276 - 27763.433: 99.2801% ( 3) 00:44:48.426 27763.433 - 27882.589: 99.3128% ( 4) 00:44:48.426 27882.589 - 28001.745: 99.3292% ( 2) 00:44:48.426 28001.745 - 28120.902: 99.3619% ( 4) 00:44:48.426 28120.902 - 28240.058: 99.3783% ( 2) 00:44:48.426 28240.058 - 28359.215: 99.4028% ( 3) 00:44:48.426 28359.215 - 28478.371: 99.4274% ( 3) 00:44:48.426 28478.371 - 28597.527: 99.4519% ( 3) 00:44:48.426 28597.527 - 28716.684: 99.4764% ( 3) 00:44:48.426 33602.095 - 33840.407: 99.5255% ( 6) 00:44:48.426 33840.407 - 34078.720: 99.5746% ( 6) 00:44:48.426 34078.720 - 34317.033: 99.6237% ( 6) 00:44:48.426 34317.033 - 34555.345: 99.6728% ( 6) 00:44:48.426 34555.345 - 34793.658: 99.7219% ( 6) 00:44:48.426 34793.658 - 35031.971: 99.7709% ( 6) 00:44:48.426 35031.971 - 35270.284: 99.8200% ( 6) 00:44:48.426 35270.284 - 35508.596: 99.8691% ( 6) 00:44:48.426 35508.596 - 35746.909: 99.9182% ( 6) 00:44:48.426 35746.909 - 35985.222: 99.9673% ( 6) 00:44:48.426 35985.222 - 36223.535: 100.0000% ( 4) 00:44:48.426 00:44:48.426 09:55:55 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:44:49.803 Initializing NVMe Controllers 00:44:49.803 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:49.803 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:44:49.803 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:44:49.803 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:44:49.803 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:44:49.803 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:44:49.803 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:44:49.803 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:44:49.803 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:44:49.803 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:44:49.803 Initialization complete. Launching workers. 
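The run whose results follow was launched above as spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0. As a reading aid, here is a minimal sketch of how one might repeat that invocation outside the harness; the flag glosses in the comments are assumptions based on general spdk_nvme_perf usage, not something this log states:

  #!/usr/bin/env bash
  # Sketch: repeat the write workload recorded in this log.
  # Assumes the repo path seen in the log and locally bound NVMe devices.
  SPDK_ROOT=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK_ROOT/build/bin/spdk_nvme_perf" -q 128 -w write -o 12288 -t 1 -LL -i 0
  # -q 128    queue depth per namespace
  # -w write  sequential write workload
  # -o 12288  I/O size in bytes (12 KiB)
  # -t 1      run time in seconds
  # -LL       -L enables software latency tracking; giving it twice also
  #           prints the detailed per-range histograms seen in this log
  # -i 0      shared-memory group ID used by the test harness

With -t 1 the figures below are a one-second sample, which is worth keeping in mind when comparing the per-device IOPS and latency numbers.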
00:44:49.803 ======================================================== 00:44:49.803 Latency(us) 00:44:49.803 Device Information : IOPS MiB/s Average min max 00:44:49.803 PCIE (0000:00:10.0) NSID 1 from core 0: 10592.05 124.13 12112.06 9845.62 46123.20 00:44:49.803 PCIE (0000:00:11.0) NSID 1 from core 0: 10592.05 124.13 12077.94 10169.35 42699.98 00:44:49.803 PCIE (0000:00:13.0) NSID 1 from core 0: 10592.05 124.13 12043.56 10118.47 40326.56 00:44:49.803 PCIE (0000:00:12.0) NSID 1 from core 0: 10592.05 124.13 12010.01 10004.10 37010.58 00:44:49.803 PCIE (0000:00:12.0) NSID 2 from core 0: 10592.05 124.13 11976.77 10224.12 34008.80 00:44:49.803 PCIE (0000:00:12.0) NSID 3 from core 0: 10592.05 124.13 11942.02 10117.89 30702.55 00:44:49.803 ======================================================== 00:44:49.803 Total : 63552.33 744.75 12027.06 9845.62 46123.20 00:44:49.803 00:44:49.803 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:44:49.803 ================================================================================= 00:44:49.803 1.00000% : 10247.447us 00:44:49.803 10.00000% : 10724.073us 00:44:49.803 25.00000% : 11141.120us 00:44:49.803 50.00000% : 11677.324us 00:44:49.803 75.00000% : 12332.684us 00:44:49.803 90.00000% : 13107.200us 00:44:49.803 95.00000% : 13702.982us 00:44:49.803 98.00000% : 14537.076us 00:44:49.803 99.00000% : 35031.971us 00:44:49.803 99.50000% : 43849.542us 00:44:49.803 99.90000% : 45756.044us 00:44:49.803 99.99000% : 46232.669us 00:44:49.803 99.99900% : 46232.669us 00:44:49.803 99.99990% : 46232.669us 00:44:49.803 99.99999% : 46232.669us 00:44:49.803 00:44:49.803 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:44:49.803 ================================================================================= 00:44:49.803 1.00000% : 10485.760us 00:44:49.803 10.00000% : 10843.229us 00:44:49.803 25.00000% : 11200.698us 00:44:49.803 50.00000% : 11677.324us 00:44:49.803 75.00000% : 12273.105us 00:44:49.803 90.00000% : 13047.622us 00:44:49.803 95.00000% : 13524.247us 00:44:49.803 98.00000% : 14656.233us 00:44:49.803 99.00000% : 32887.156us 00:44:49.803 99.50000% : 40751.476us 00:44:49.803 99.90000% : 42419.665us 00:44:49.803 99.99000% : 42896.291us 00:44:49.803 99.99900% : 42896.291us 00:44:49.803 99.99990% : 42896.291us 00:44:49.803 99.99999% : 42896.291us 00:44:49.803 00:44:49.803 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:44:49.803 ================================================================================= 00:44:49.803 1.00000% : 10485.760us 00:44:49.803 10.00000% : 10902.807us 00:44:49.803 25.00000% : 11200.698us 00:44:49.803 50.00000% : 11677.324us 00:44:49.803 75.00000% : 12213.527us 00:44:49.803 90.00000% : 12988.044us 00:44:49.803 95.00000% : 13524.247us 00:44:49.803 98.00000% : 14715.811us 00:44:49.803 99.00000% : 30384.873us 00:44:49.803 99.50000% : 38130.036us 00:44:49.803 99.90000% : 40036.538us 00:44:49.803 99.99000% : 40513.164us 00:44:49.803 99.99900% : 40513.164us 00:44:49.803 99.99990% : 40513.164us 00:44:49.803 99.99999% : 40513.164us 00:44:49.803 00:44:49.803 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:44:49.803 ================================================================================= 00:44:49.803 1.00000% : 10485.760us 00:44:49.803 10.00000% : 10902.807us 00:44:49.803 25.00000% : 11200.698us 00:44:49.803 50.00000% : 11677.324us 00:44:49.803 75.00000% : 12213.527us 00:44:49.803 90.00000% : 12988.044us 00:44:49.803 95.00000% : 13702.982us 00:44:49.803 98.00000% : 
14537.076us 00:44:49.803 99.00000% : 27286.807us 00:44:49.803 99.50000% : 35031.971us 00:44:49.803 99.90000% : 36700.160us 00:44:49.803 99.99000% : 37176.785us 00:44:49.803 99.99900% : 37176.785us 00:44:49.803 99.99990% : 37176.785us 00:44:49.803 99.99999% : 37176.785us 00:44:49.803 00:44:49.803 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:44:49.803 ================================================================================= 00:44:49.803 1.00000% : 10485.760us 00:44:49.803 10.00000% : 10902.807us 00:44:49.803 25.00000% : 11200.698us 00:44:49.803 50.00000% : 11677.324us 00:44:49.803 75.00000% : 12273.105us 00:44:49.803 90.00000% : 12988.044us 00:44:49.803 95.00000% : 13762.560us 00:44:49.803 98.00000% : 14656.233us 00:44:49.803 99.00000% : 24188.742us 00:44:49.803 99.50000% : 31933.905us 00:44:49.803 99.90000% : 33602.095us 00:44:49.804 99.99000% : 34078.720us 00:44:49.804 99.99900% : 34078.720us 00:44:49.804 99.99990% : 34078.720us 00:44:49.804 99.99999% : 34078.720us 00:44:49.804 00:44:49.804 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:44:49.804 ================================================================================= 00:44:49.804 1.00000% : 10485.760us 00:44:49.804 10.00000% : 10843.229us 00:44:49.804 25.00000% : 11200.698us 00:44:49.804 50.00000% : 11677.324us 00:44:49.804 75.00000% : 12273.105us 00:44:49.804 90.00000% : 13047.622us 00:44:49.804 95.00000% : 13643.404us 00:44:49.804 98.00000% : 14715.811us 00:44:49.804 99.00000% : 21328.989us 00:44:49.804 99.50000% : 26571.869us 00:44:49.804 99.90000% : 30265.716us 00:44:49.804 99.99000% : 30742.342us 00:44:49.804 99.99900% : 30742.342us 00:44:49.804 99.99990% : 30742.342us 00:44:49.804 99.99999% : 30742.342us 00:44:49.804 00:44:49.804 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:44:49.804 ============================================================================== 00:44:49.804 Range in us Cumulative IO count 00:44:49.804 9830.400 - 9889.978: 0.0094% ( 1) 00:44:49.804 9889.978 - 9949.556: 0.0377% ( 3) 00:44:49.804 9949.556 - 10009.135: 0.0753% ( 4) 00:44:49.804 10009.135 - 10068.713: 0.2165% ( 15) 00:44:49.804 10068.713 - 10128.291: 0.5742% ( 38) 00:44:49.804 10128.291 - 10187.869: 0.9319% ( 38) 00:44:49.804 10187.869 - 10247.447: 1.1672% ( 25) 00:44:49.804 10247.447 - 10307.025: 1.6660% ( 53) 00:44:49.804 10307.025 - 10366.604: 2.1931% ( 56) 00:44:49.804 10366.604 - 10426.182: 3.0779% ( 94) 00:44:49.804 10426.182 - 10485.760: 4.0945% ( 108) 00:44:49.804 10485.760 - 10545.338: 5.6005% ( 160) 00:44:49.804 10545.338 - 10604.916: 7.2477% ( 175) 00:44:49.804 10604.916 - 10664.495: 8.6502% ( 149) 00:44:49.804 10664.495 - 10724.073: 10.7304% ( 221) 00:44:49.804 10724.073 - 10783.651: 12.8389% ( 224) 00:44:49.804 10783.651 - 10843.229: 14.8908% ( 218) 00:44:49.804 10843.229 - 10902.807: 16.8957% ( 213) 00:44:49.804 10902.807 - 10962.385: 19.3053% ( 256) 00:44:49.804 10962.385 - 11021.964: 21.2632% ( 208) 00:44:49.804 11021.964 - 11081.542: 23.8705% ( 277) 00:44:49.804 11081.542 - 11141.120: 26.7225% ( 303) 00:44:49.804 11141.120 - 11200.698: 29.1227% ( 255) 00:44:49.804 11200.698 - 11260.276: 31.5512% ( 258) 00:44:49.804 11260.276 - 11319.855: 34.1867% ( 280) 00:44:49.804 11319.855 - 11379.433: 37.0953% ( 309) 00:44:49.804 11379.433 - 11439.011: 39.6555% ( 272) 00:44:49.804 11439.011 - 11498.589: 42.2628% ( 277) 00:44:49.804 11498.589 - 11558.167: 44.7007% ( 259) 00:44:49.804 11558.167 - 11617.745: 47.3739% ( 284) 00:44:49.804 11617.745 - 11677.324: 50.2918% ( 
310) 00:44:49.804 11677.324 - 11736.902: 52.8897% ( 276) 00:44:49.804 11736.902 - 11796.480: 55.3464% ( 261) 00:44:49.804 11796.480 - 11856.058: 57.8972% ( 271) 00:44:49.804 11856.058 - 11915.636: 60.6645% ( 294) 00:44:49.804 11915.636 - 11975.215: 63.2059% ( 270) 00:44:49.804 11975.215 - 12034.793: 65.6438% ( 259) 00:44:49.804 12034.793 - 12094.371: 67.7993% ( 229) 00:44:49.804 12094.371 - 12153.949: 70.1242% ( 247) 00:44:49.804 12153.949 - 12213.527: 72.1762% ( 218) 00:44:49.804 12213.527 - 12273.105: 73.6822% ( 160) 00:44:49.804 12273.105 - 12332.684: 75.3389% ( 176) 00:44:49.804 12332.684 - 12392.262: 76.7225% ( 147) 00:44:49.804 12392.262 - 12451.840: 78.0591% ( 142) 00:44:49.804 12451.840 - 12511.418: 79.4334% ( 146) 00:44:49.804 12511.418 - 12570.996: 80.7323% ( 138) 00:44:49.804 12570.996 - 12630.575: 82.2195% ( 158) 00:44:49.804 12630.575 - 12690.153: 83.4714% ( 133) 00:44:49.804 12690.153 - 12749.731: 84.5727% ( 117) 00:44:49.804 12749.731 - 12809.309: 85.8057% ( 131) 00:44:49.804 12809.309 - 12868.887: 86.9352% ( 120) 00:44:49.804 12868.887 - 12928.465: 87.8106% ( 93) 00:44:49.804 12928.465 - 12988.044: 88.8460% ( 110) 00:44:49.804 12988.044 - 13047.622: 89.7026% ( 91) 00:44:49.804 13047.622 - 13107.200: 90.4367% ( 78) 00:44:49.804 13107.200 - 13166.778: 91.0392% ( 64) 00:44:49.804 13166.778 - 13226.356: 91.7075% ( 71) 00:44:49.804 13226.356 - 13285.935: 92.2816% ( 61) 00:44:49.804 13285.935 - 13345.513: 92.7805% ( 53) 00:44:49.804 13345.513 - 13405.091: 93.1852% ( 43) 00:44:49.804 13405.091 - 13464.669: 93.5806% ( 42) 00:44:49.804 13464.669 - 13524.247: 93.9759% ( 42) 00:44:49.804 13524.247 - 13583.825: 94.4465% ( 50) 00:44:49.804 13583.825 - 13643.404: 94.7948% ( 37) 00:44:49.804 13643.404 - 13702.982: 95.3219% ( 56) 00:44:49.804 13702.982 - 13762.560: 95.7549% ( 46) 00:44:49.804 13762.560 - 13822.138: 96.1126% ( 38) 00:44:49.804 13822.138 - 13881.716: 96.4232% ( 33) 00:44:49.804 13881.716 - 13941.295: 96.6773% ( 27) 00:44:49.804 13941.295 - 14000.873: 96.9221% ( 26) 00:44:49.804 14000.873 - 14060.451: 97.0821% ( 17) 00:44:49.804 14060.451 - 14120.029: 97.2044% ( 13) 00:44:49.804 14120.029 - 14179.607: 97.2797% ( 8) 00:44:49.804 14179.607 - 14239.185: 97.3833% ( 11) 00:44:49.804 14239.185 - 14298.764: 97.6562% ( 29) 00:44:49.804 14298.764 - 14358.342: 97.7786% ( 13) 00:44:49.804 14358.342 - 14417.920: 97.8633% ( 9) 00:44:49.804 14417.920 - 14477.498: 97.9292% ( 7) 00:44:49.804 14477.498 - 14537.076: 98.0516% ( 13) 00:44:49.804 14537.076 - 14596.655: 98.1645% ( 12) 00:44:49.804 14596.655 - 14656.233: 98.2492% ( 9) 00:44:49.804 14656.233 - 14715.811: 98.3057% ( 6) 00:44:49.804 14715.811 - 14775.389: 98.3528% ( 5) 00:44:49.804 14775.389 - 14834.967: 98.4093% ( 6) 00:44:49.804 14834.967 - 14894.545: 98.4469% ( 4) 00:44:49.804 14894.545 - 14954.124: 98.4657% ( 2) 00:44:49.804 14954.124 - 15013.702: 98.4940% ( 3) 00:44:49.804 15013.702 - 15073.280: 98.5222% ( 3) 00:44:49.804 15073.280 - 15132.858: 98.5410% ( 2) 00:44:49.804 15132.858 - 15192.436: 98.5787% ( 4) 00:44:49.804 15192.436 - 15252.015: 98.6258% ( 5) 00:44:49.804 15252.015 - 15371.171: 98.7293% ( 11) 00:44:49.804 15371.171 - 15490.327: 98.7764% ( 5) 00:44:49.804 15490.327 - 15609.484: 98.7952% ( 2) 00:44:49.804 33840.407 - 34078.720: 98.8234% ( 3) 00:44:49.804 34078.720 - 34317.033: 98.8705% ( 5) 00:44:49.804 34317.033 - 34555.345: 98.9175% ( 5) 00:44:49.804 34555.345 - 34793.658: 98.9646% ( 5) 00:44:49.804 34793.658 - 35031.971: 99.0023% ( 4) 00:44:49.804 35031.971 - 35270.284: 99.0493% ( 5) 00:44:49.804 35270.284 - 
35508.596: 99.0964% ( 5) 00:44:49.804 35508.596 - 35746.909: 99.1434% ( 5) 00:44:49.804 35746.909 - 35985.222: 99.1905% ( 5) 00:44:49.804 35985.222 - 36223.535: 99.2470% ( 6) 00:44:49.804 36223.535 - 36461.847: 99.2941% ( 5) 00:44:49.804 36461.847 - 36700.160: 99.3505% ( 6) 00:44:49.804 36700.160 - 36938.473: 99.3976% ( 5) 00:44:49.804 43134.604 - 43372.916: 99.4164% ( 2) 00:44:49.804 43372.916 - 43611.229: 99.4541% ( 4) 00:44:49.804 43611.229 - 43849.542: 99.5200% ( 7) 00:44:49.804 43849.542 - 44087.855: 99.5670% ( 5) 00:44:49.804 44087.855 - 44326.167: 99.6141% ( 5) 00:44:49.804 44326.167 - 44564.480: 99.6611% ( 5) 00:44:49.804 44564.480 - 44802.793: 99.7082% ( 5) 00:44:49.804 44802.793 - 45041.105: 99.7741% ( 7) 00:44:49.804 45041.105 - 45279.418: 99.8212% ( 5) 00:44:49.804 45279.418 - 45517.731: 99.8776% ( 6) 00:44:49.804 45517.731 - 45756.044: 99.9247% ( 5) 00:44:49.804 45756.044 - 45994.356: 99.9718% ( 5) 00:44:49.804 45994.356 - 46232.669: 100.0000% ( 3) 00:44:49.804 00:44:49.804 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:44:49.804 ============================================================================== 00:44:49.804 Range in us Cumulative IO count 00:44:49.804 10128.291 - 10187.869: 0.0094% ( 1) 00:44:49.804 10187.869 - 10247.447: 0.1412% ( 14) 00:44:49.804 10247.447 - 10307.025: 0.2730% ( 14) 00:44:49.804 10307.025 - 10366.604: 0.5271% ( 27) 00:44:49.804 10366.604 - 10426.182: 0.9413% ( 44) 00:44:49.804 10426.182 - 10485.760: 1.4684% ( 56) 00:44:49.804 10485.760 - 10545.338: 2.2967% ( 88) 00:44:49.804 10545.338 - 10604.916: 3.3603% ( 113) 00:44:49.804 10604.916 - 10664.495: 4.5745% ( 129) 00:44:49.804 10664.495 - 10724.073: 6.2123% ( 174) 00:44:49.804 10724.073 - 10783.651: 7.9443% ( 184) 00:44:49.804 10783.651 - 10843.229: 10.0998% ( 229) 00:44:49.804 10843.229 - 10902.807: 12.1517% ( 218) 00:44:49.804 10902.807 - 10962.385: 14.3543% ( 234) 00:44:49.804 10962.385 - 11021.964: 16.6510% ( 244) 00:44:49.804 11021.964 - 11081.542: 19.6913% ( 323) 00:44:49.804 11081.542 - 11141.120: 23.5505% ( 410) 00:44:49.804 11141.120 - 11200.698: 27.0802% ( 375) 00:44:49.804 11200.698 - 11260.276: 30.6099% ( 375) 00:44:49.804 11260.276 - 11319.855: 33.8573% ( 345) 00:44:49.804 11319.855 - 11379.433: 37.4623% ( 383) 00:44:49.804 11379.433 - 11439.011: 40.8697% ( 362) 00:44:49.804 11439.011 - 11498.589: 44.3148% ( 366) 00:44:49.804 11498.589 - 11558.167: 47.4021% ( 328) 00:44:49.804 11558.167 - 11617.745: 49.8588% ( 261) 00:44:49.804 11617.745 - 11677.324: 52.4567% ( 276) 00:44:49.804 11677.324 - 11736.902: 54.8381% ( 253) 00:44:49.804 11736.902 - 11796.480: 57.4172% ( 274) 00:44:49.804 11796.480 - 11856.058: 59.7233% ( 245) 00:44:49.804 11856.058 - 11915.636: 62.2929% ( 273) 00:44:49.804 11915.636 - 11975.215: 64.5237% ( 237) 00:44:49.804 11975.215 - 12034.793: 66.8204% ( 244) 00:44:49.804 12034.793 - 12094.371: 69.0700% ( 239) 00:44:49.804 12094.371 - 12153.949: 71.6303% ( 272) 00:44:49.804 12153.949 - 12213.527: 73.7575% ( 226) 00:44:49.804 12213.527 - 12273.105: 75.5553% ( 191) 00:44:49.804 12273.105 - 12332.684: 77.3532% ( 191) 00:44:49.804 12332.684 - 12392.262: 78.9533% ( 170) 00:44:49.804 12392.262 - 12451.840: 80.2334% ( 136) 00:44:49.804 12451.840 - 12511.418: 81.6924% ( 155) 00:44:49.804 12511.418 - 12570.996: 82.8313% ( 121) 00:44:49.804 12570.996 - 12630.575: 84.0926% ( 134) 00:44:49.804 12630.575 - 12690.153: 85.3069% ( 129) 00:44:49.804 12690.153 - 12749.731: 86.3046% ( 106) 00:44:49.804 12749.731 - 12809.309: 87.2553% ( 101) 00:44:49.804 12809.309 - 
12868.887: 88.2812% ( 109) 00:44:49.804 12868.887 - 12928.465: 89.0248% ( 79) 00:44:49.804 12928.465 - 12988.044: 89.7684% ( 79) 00:44:49.804 12988.044 - 13047.622: 90.5968% ( 88) 00:44:49.804 13047.622 - 13107.200: 91.3309% ( 78) 00:44:49.805 13107.200 - 13166.778: 92.1028% ( 82) 00:44:49.805 13166.778 - 13226.356: 92.7617% ( 70) 00:44:49.805 13226.356 - 13285.935: 93.3358% ( 61) 00:44:49.805 13285.935 - 13345.513: 93.9759% ( 68) 00:44:49.805 13345.513 - 13405.091: 94.4371% ( 49) 00:44:49.805 13405.091 - 13464.669: 94.8136% ( 40) 00:44:49.805 13464.669 - 13524.247: 95.1713% ( 38) 00:44:49.805 13524.247 - 13583.825: 95.4725% ( 32) 00:44:49.805 13583.825 - 13643.404: 95.8114% ( 36) 00:44:49.805 13643.404 - 13702.982: 96.1879% ( 40) 00:44:49.805 13702.982 - 13762.560: 96.3761% ( 20) 00:44:49.805 13762.560 - 13822.138: 96.6114% ( 25) 00:44:49.805 13822.138 - 13881.716: 96.8185% ( 22) 00:44:49.805 13881.716 - 13941.295: 96.9785% ( 17) 00:44:49.805 13941.295 - 14000.873: 97.1480% ( 18) 00:44:49.805 14000.873 - 14060.451: 97.2515% ( 11) 00:44:49.805 14060.451 - 14120.029: 97.3645% ( 12) 00:44:49.805 14120.029 - 14179.607: 97.4680% ( 11) 00:44:49.805 14179.607 - 14239.185: 97.5527% ( 9) 00:44:49.805 14239.185 - 14298.764: 97.6374% ( 9) 00:44:49.805 14298.764 - 14358.342: 97.7221% ( 9) 00:44:49.805 14358.342 - 14417.920: 97.7692% ( 5) 00:44:49.805 14417.920 - 14477.498: 97.8163% ( 5) 00:44:49.805 14477.498 - 14537.076: 97.8539% ( 4) 00:44:49.805 14537.076 - 14596.655: 97.9198% ( 7) 00:44:49.805 14596.655 - 14656.233: 98.0422% ( 13) 00:44:49.805 14656.233 - 14715.811: 98.1739% ( 14) 00:44:49.805 14715.811 - 14775.389: 98.2775% ( 11) 00:44:49.805 14775.389 - 14834.967: 98.3810% ( 11) 00:44:49.805 14834.967 - 14894.545: 98.4657% ( 9) 00:44:49.805 14894.545 - 14954.124: 98.5599% ( 10) 00:44:49.805 14954.124 - 15013.702: 98.6069% ( 5) 00:44:49.805 15013.702 - 15073.280: 98.6634% ( 6) 00:44:49.805 15073.280 - 15132.858: 98.7199% ( 6) 00:44:49.805 15132.858 - 15192.436: 98.7669% ( 5) 00:44:49.805 15192.436 - 15252.015: 98.7952% ( 3) 00:44:49.805 31695.593 - 31933.905: 98.8046% ( 1) 00:44:49.805 31933.905 - 32172.218: 98.8517% ( 5) 00:44:49.805 32172.218 - 32410.531: 98.8987% ( 5) 00:44:49.805 32410.531 - 32648.844: 98.9552% ( 6) 00:44:49.805 32648.844 - 32887.156: 99.0023% ( 5) 00:44:49.805 32887.156 - 33125.469: 99.0493% ( 5) 00:44:49.805 33125.469 - 33363.782: 99.1058% ( 6) 00:44:49.805 33363.782 - 33602.095: 99.1529% ( 5) 00:44:49.805 33602.095 - 33840.407: 99.2093% ( 6) 00:44:49.805 33840.407 - 34078.720: 99.2658% ( 6) 00:44:49.805 34078.720 - 34317.033: 99.3035% ( 4) 00:44:49.805 34317.033 - 34555.345: 99.3505% ( 5) 00:44:49.805 34555.345 - 34793.658: 99.3976% ( 5) 00:44:49.805 40036.538 - 40274.851: 99.4352% ( 4) 00:44:49.805 40274.851 - 40513.164: 99.4823% ( 5) 00:44:49.805 40513.164 - 40751.476: 99.5482% ( 7) 00:44:49.805 40751.476 - 40989.789: 99.5858% ( 4) 00:44:49.805 40989.789 - 41228.102: 99.6423% ( 6) 00:44:49.805 41228.102 - 41466.415: 99.7082% ( 7) 00:44:49.805 41466.415 - 41704.727: 99.7647% ( 6) 00:44:49.805 41704.727 - 41943.040: 99.8117% ( 5) 00:44:49.805 41943.040 - 42181.353: 99.8776% ( 7) 00:44:49.805 42181.353 - 42419.665: 99.9247% ( 5) 00:44:49.805 42419.665 - 42657.978: 99.9812% ( 6) 00:44:49.805 42657.978 - 42896.291: 100.0000% ( 2) 00:44:49.805 00:44:49.805 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:44:49.805 ============================================================================== 00:44:49.805 Range in us Cumulative IO count 00:44:49.805 10068.713 - 
10128.291: 0.0094% ( 1) 00:44:49.805 10187.869 - 10247.447: 0.1224% ( 12) 00:44:49.805 10247.447 - 10307.025: 0.2353% ( 12) 00:44:49.805 10307.025 - 10366.604: 0.4518% ( 23) 00:44:49.805 10366.604 - 10426.182: 0.8189% ( 39) 00:44:49.805 10426.182 - 10485.760: 1.3178% ( 53) 00:44:49.805 10485.760 - 10545.338: 1.9578% ( 68) 00:44:49.805 10545.338 - 10604.916: 3.0403% ( 115) 00:44:49.805 10604.916 - 10664.495: 4.2922% ( 133) 00:44:49.805 10664.495 - 10724.073: 6.0617% ( 188) 00:44:49.805 10724.073 - 10783.651: 7.7843% ( 183) 00:44:49.805 10783.651 - 10843.229: 9.6762% ( 201) 00:44:49.805 10843.229 - 10902.807: 12.1611% ( 264) 00:44:49.805 10902.807 - 10962.385: 14.4955% ( 248) 00:44:49.805 10962.385 - 11021.964: 17.3569% ( 304) 00:44:49.805 11021.964 - 11081.542: 20.5666% ( 341) 00:44:49.805 11081.542 - 11141.120: 23.7011% ( 333) 00:44:49.805 11141.120 - 11200.698: 27.3343% ( 386) 00:44:49.805 11200.698 - 11260.276: 31.2500% ( 416) 00:44:49.805 11260.276 - 11319.855: 34.5350% ( 349) 00:44:49.805 11319.855 - 11379.433: 38.0083% ( 369) 00:44:49.805 11379.433 - 11439.011: 41.0486% ( 323) 00:44:49.805 11439.011 - 11498.589: 43.7500% ( 287) 00:44:49.805 11498.589 - 11558.167: 46.3197% ( 273) 00:44:49.805 11558.167 - 11617.745: 49.0023% ( 285) 00:44:49.805 11617.745 - 11677.324: 52.0896% ( 328) 00:44:49.805 11677.324 - 11736.902: 55.0169% ( 311) 00:44:49.805 11736.902 - 11796.480: 57.7654% ( 292) 00:44:49.805 11796.480 - 11856.058: 60.6739% ( 309) 00:44:49.805 11856.058 - 11915.636: 63.2154% ( 270) 00:44:49.805 11915.636 - 11975.215: 65.6438% ( 258) 00:44:49.805 11975.215 - 12034.793: 68.5523% ( 309) 00:44:49.805 12034.793 - 12094.371: 70.9996% ( 260) 00:44:49.805 12094.371 - 12153.949: 72.8633% ( 198) 00:44:49.805 12153.949 - 12213.527: 75.1977% ( 248) 00:44:49.805 12213.527 - 12273.105: 77.1837% ( 211) 00:44:49.805 12273.105 - 12332.684: 78.6709% ( 158) 00:44:49.805 12332.684 - 12392.262: 80.4688% ( 191) 00:44:49.805 12392.262 - 12451.840: 82.0218% ( 165) 00:44:49.805 12451.840 - 12511.418: 83.6126% ( 169) 00:44:49.805 12511.418 - 12570.996: 84.7703% ( 123) 00:44:49.805 12570.996 - 12630.575: 85.6175% ( 90) 00:44:49.805 12630.575 - 12690.153: 86.3799% ( 81) 00:44:49.805 12690.153 - 12749.731: 87.2741% ( 95) 00:44:49.805 12749.731 - 12809.309: 88.2530% ( 104) 00:44:49.805 12809.309 - 12868.887: 89.1002% ( 90) 00:44:49.805 12868.887 - 12928.465: 89.8249% ( 77) 00:44:49.805 12928.465 - 12988.044: 90.3614% ( 57) 00:44:49.805 12988.044 - 13047.622: 90.8038% ( 47) 00:44:49.805 13047.622 - 13107.200: 91.4062% ( 64) 00:44:49.805 13107.200 - 13166.778: 91.9239% ( 55) 00:44:49.805 13166.778 - 13226.356: 92.4699% ( 58) 00:44:49.805 13226.356 - 13285.935: 92.9876% ( 55) 00:44:49.805 13285.935 - 13345.513: 93.4770% ( 52) 00:44:49.805 13345.513 - 13405.091: 94.0512% ( 61) 00:44:49.805 13405.091 - 13464.669: 94.6254% ( 61) 00:44:49.805 13464.669 - 13524.247: 95.0489% ( 45) 00:44:49.805 13524.247 - 13583.825: 95.4349% ( 41) 00:44:49.805 13583.825 - 13643.404: 95.7925% ( 38) 00:44:49.805 13643.404 - 13702.982: 96.1126% ( 34) 00:44:49.805 13702.982 - 13762.560: 96.2820% ( 18) 00:44:49.805 13762.560 - 13822.138: 96.4326% ( 16) 00:44:49.805 13822.138 - 13881.716: 96.5456% ( 12) 00:44:49.805 13881.716 - 13941.295: 96.6773% ( 14) 00:44:49.805 13941.295 - 14000.873: 96.8091% ( 14) 00:44:49.805 14000.873 - 14060.451: 96.9221% ( 12) 00:44:49.805 14060.451 - 14120.029: 97.0538% ( 14) 00:44:49.805 14120.029 - 14179.607: 97.1574% ( 11) 00:44:49.805 14179.607 - 14239.185: 97.2609% ( 11) 00:44:49.805 14239.185 - 14298.764: 
97.4398% ( 19) 00:44:49.805 14298.764 - 14358.342: 97.5151% ( 8) 00:44:49.805 14358.342 - 14417.920: 97.5809% ( 7) 00:44:49.805 14417.920 - 14477.498: 97.6845% ( 11) 00:44:49.805 14477.498 - 14537.076: 97.7786% ( 10) 00:44:49.805 14537.076 - 14596.655: 97.9010% ( 13) 00:44:49.805 14596.655 - 14656.233: 97.9951% ( 10) 00:44:49.805 14656.233 - 14715.811: 98.0328% ( 4) 00:44:49.805 14715.811 - 14775.389: 98.0610% ( 3) 00:44:49.805 14775.389 - 14834.967: 98.0798% ( 2) 00:44:49.805 14834.967 - 14894.545: 98.1175% ( 4) 00:44:49.805 14894.545 - 14954.124: 98.1363% ( 2) 00:44:49.805 14954.124 - 15013.702: 98.1551% ( 2) 00:44:49.805 15013.702 - 15073.280: 98.1834% ( 3) 00:44:49.805 15073.280 - 15132.858: 98.1928% ( 1) 00:44:49.805 15728.640 - 15847.796: 98.2492% ( 6) 00:44:49.805 15847.796 - 15966.953: 98.3245% ( 8) 00:44:49.805 15966.953 - 16086.109: 98.3998% ( 8) 00:44:49.805 16086.109 - 16205.265: 98.6163% ( 23) 00:44:49.805 16205.265 - 16324.422: 98.6728% ( 6) 00:44:49.805 16324.422 - 16443.578: 98.7105% ( 4) 00:44:49.805 16443.578 - 16562.735: 98.7575% ( 5) 00:44:49.805 16562.735 - 16681.891: 98.7952% ( 4) 00:44:49.805 29431.622 - 29550.778: 98.8234% ( 3) 00:44:49.805 29550.778 - 29669.935: 98.8517% ( 3) 00:44:49.805 29669.935 - 29789.091: 98.8705% ( 2) 00:44:49.805 29789.091 - 29908.247: 98.8987% ( 3) 00:44:49.805 29908.247 - 30027.404: 98.9270% ( 3) 00:44:49.805 30027.404 - 30146.560: 98.9552% ( 3) 00:44:49.805 30146.560 - 30265.716: 98.9834% ( 3) 00:44:49.805 30265.716 - 30384.873: 99.0117% ( 3) 00:44:49.805 30384.873 - 30504.029: 99.0399% ( 3) 00:44:49.805 30504.029 - 30742.342: 99.0776% ( 4) 00:44:49.805 30742.342 - 30980.655: 99.1340% ( 6) 00:44:49.805 30980.655 - 31218.967: 99.1811% ( 5) 00:44:49.805 31218.967 - 31457.280: 99.2282% ( 5) 00:44:49.805 31457.280 - 31695.593: 99.2846% ( 6) 00:44:49.805 31695.593 - 31933.905: 99.3317% ( 5) 00:44:49.805 31933.905 - 32172.218: 99.3788% ( 5) 00:44:49.805 32172.218 - 32410.531: 99.3976% ( 2) 00:44:49.805 37415.098 - 37653.411: 99.4164% ( 2) 00:44:49.805 37653.411 - 37891.724: 99.4635% ( 5) 00:44:49.805 37891.724 - 38130.036: 99.5200% ( 6) 00:44:49.805 38130.036 - 38368.349: 99.5670% ( 5) 00:44:49.805 38368.349 - 38606.662: 99.6141% ( 5) 00:44:49.805 38606.662 - 38844.975: 99.6611% ( 5) 00:44:49.805 38844.975 - 39083.287: 99.7082% ( 5) 00:44:49.805 39083.287 - 39321.600: 99.7647% ( 6) 00:44:49.805 39321.600 - 39559.913: 99.8212% ( 6) 00:44:49.805 39559.913 - 39798.225: 99.8776% ( 6) 00:44:49.805 39798.225 - 40036.538: 99.9341% ( 6) 00:44:49.805 40036.538 - 40274.851: 99.9812% ( 5) 00:44:49.805 40274.851 - 40513.164: 100.0000% ( 2) 00:44:49.805 00:44:49.805 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:44:49.805 ============================================================================== 00:44:49.805 Range in us Cumulative IO count 00:44:49.805 9949.556 - 10009.135: 0.0188% ( 2) 00:44:49.805 10009.135 - 10068.713: 0.0565% ( 4) 00:44:49.805 10068.713 - 10128.291: 0.1130% ( 6) 00:44:49.806 10128.291 - 10187.869: 0.1694% ( 6) 00:44:49.806 10187.869 - 10247.447: 0.2071% ( 4) 00:44:49.806 10247.447 - 10307.025: 0.2918% ( 9) 00:44:49.806 10307.025 - 10366.604: 0.5365% ( 26) 00:44:49.806 10366.604 - 10426.182: 0.9036% ( 39) 00:44:49.806 10426.182 - 10485.760: 1.4590% ( 59) 00:44:49.806 10485.760 - 10545.338: 2.1084% ( 69) 00:44:49.806 10545.338 - 10604.916: 3.1156% ( 107) 00:44:49.806 10604.916 - 10664.495: 4.3110% ( 127) 00:44:49.806 10664.495 - 10724.073: 5.8735% ( 166) 00:44:49.806 10724.073 - 10783.651: 7.4642% ( 169) 
00:44:49.806 10783.651 - 10843.229: 9.2244% ( 187) 00:44:49.806 10843.229 - 10902.807: 11.6529% ( 258) 00:44:49.806 10902.807 - 10962.385: 14.0060% ( 250) 00:44:49.806 10962.385 - 11021.964: 16.6980% ( 286) 00:44:49.806 11021.964 - 11081.542: 19.3995% ( 287) 00:44:49.806 11081.542 - 11141.120: 22.4586% ( 325) 00:44:49.806 11141.120 - 11200.698: 25.5930% ( 333) 00:44:49.806 11200.698 - 11260.276: 29.1510% ( 378) 00:44:49.806 11260.276 - 11319.855: 32.9537% ( 404) 00:44:49.806 11319.855 - 11379.433: 36.3517% ( 361) 00:44:49.806 11379.433 - 11439.011: 39.3825% ( 322) 00:44:49.806 11439.011 - 11498.589: 42.6864% ( 351) 00:44:49.806 11498.589 - 11558.167: 45.6514% ( 315) 00:44:49.806 11558.167 - 11617.745: 48.6822% ( 322) 00:44:49.806 11617.745 - 11677.324: 51.4590% ( 295) 00:44:49.806 11677.324 - 11736.902: 54.2733% ( 299) 00:44:49.806 11736.902 - 11796.480: 57.5772% ( 351) 00:44:49.806 11796.480 - 11856.058: 60.0904% ( 267) 00:44:49.806 11856.058 - 11915.636: 62.9424% ( 303) 00:44:49.806 11915.636 - 11975.215: 65.9544% ( 320) 00:44:49.806 11975.215 - 12034.793: 68.7406% ( 296) 00:44:49.806 12034.793 - 12094.371: 71.2349% ( 265) 00:44:49.806 12094.371 - 12153.949: 73.6163% ( 253) 00:44:49.806 12153.949 - 12213.527: 75.7530% ( 227) 00:44:49.806 12213.527 - 12273.105: 77.6920% ( 206) 00:44:49.806 12273.105 - 12332.684: 79.2922% ( 170) 00:44:49.806 12332.684 - 12392.262: 80.8076% ( 161) 00:44:49.806 12392.262 - 12451.840: 82.1913% ( 147) 00:44:49.806 12451.840 - 12511.418: 83.3961% ( 128) 00:44:49.806 12511.418 - 12570.996: 84.4221% ( 109) 00:44:49.806 12570.996 - 12630.575: 85.4104% ( 105) 00:44:49.806 12630.575 - 12690.153: 86.3705% ( 102) 00:44:49.806 12690.153 - 12749.731: 87.3400% ( 103) 00:44:49.806 12749.731 - 12809.309: 88.0836% ( 79) 00:44:49.806 12809.309 - 12868.887: 88.9213% ( 89) 00:44:49.806 12868.887 - 12928.465: 89.7120% ( 84) 00:44:49.806 12928.465 - 12988.044: 90.3991% ( 73) 00:44:49.806 12988.044 - 13047.622: 91.0015% ( 64) 00:44:49.806 13047.622 - 13107.200: 91.4533% ( 48) 00:44:49.806 13107.200 - 13166.778: 92.0369% ( 62) 00:44:49.806 13166.778 - 13226.356: 92.5640% ( 56) 00:44:49.806 13226.356 - 13285.935: 92.9782% ( 44) 00:44:49.806 13285.935 - 13345.513: 93.3735% ( 42) 00:44:49.806 13345.513 - 13405.091: 93.7406% ( 39) 00:44:49.806 13405.091 - 13464.669: 94.0418% ( 32) 00:44:49.806 13464.669 - 13524.247: 94.3336% ( 31) 00:44:49.806 13524.247 - 13583.825: 94.5689% ( 25) 00:44:49.806 13583.825 - 13643.404: 94.8701% ( 32) 00:44:49.806 13643.404 - 13702.982: 95.1525% ( 30) 00:44:49.806 13702.982 - 13762.560: 95.3972% ( 26) 00:44:49.806 13762.560 - 13822.138: 95.5855% ( 20) 00:44:49.806 13822.138 - 13881.716: 95.7643% ( 19) 00:44:49.806 13881.716 - 13941.295: 96.0373% ( 29) 00:44:49.806 13941.295 - 14000.873: 96.3008% ( 28) 00:44:49.806 14000.873 - 14060.451: 96.5738% ( 29) 00:44:49.806 14060.451 - 14120.029: 96.7620% ( 20) 00:44:49.806 14120.029 - 14179.607: 96.8750% ( 12) 00:44:49.806 14179.607 - 14239.185: 97.0444% ( 18) 00:44:49.806 14239.185 - 14298.764: 97.1762% ( 14) 00:44:49.806 14298.764 - 14358.342: 97.3080% ( 14) 00:44:49.806 14358.342 - 14417.920: 97.5056% ( 21) 00:44:49.806 14417.920 - 14477.498: 97.7316% ( 24) 00:44:49.806 14477.498 - 14537.076: 98.0045% ( 29) 00:44:49.806 14537.076 - 14596.655: 98.2492% ( 26) 00:44:49.806 14596.655 - 14656.233: 98.3622% ( 12) 00:44:49.806 14656.233 - 14715.811: 98.4563% ( 10) 00:44:49.806 14715.811 - 14775.389: 98.5410% ( 9) 00:44:49.806 14775.389 - 14834.967: 98.6163% ( 8) 00:44:49.806 14834.967 - 14894.545: 98.6916% ( 8) 
00:44:49.806 14894.545 - 14954.124: 98.7387% ( 5) 00:44:49.806 14954.124 - 15013.702: 98.7764% ( 4) 00:44:49.806 15013.702 - 15073.280: 98.7952% ( 2) 00:44:49.806 26691.025 - 26810.182: 98.8893% ( 10) 00:44:49.806 26810.182 - 26929.338: 98.9364% ( 5) 00:44:49.806 26929.338 - 27048.495: 98.9646% ( 3) 00:44:49.806 27048.495 - 27167.651: 98.9928% ( 3) 00:44:49.806 27167.651 - 27286.807: 99.0117% ( 2) 00:44:49.806 27286.807 - 27405.964: 99.0399% ( 3) 00:44:49.806 27405.964 - 27525.120: 99.0587% ( 2) 00:44:49.806 27525.120 - 27644.276: 99.0870% ( 3) 00:44:49.806 27644.276 - 27763.433: 99.1152% ( 3) 00:44:49.806 27763.433 - 27882.589: 99.1340% ( 2) 00:44:49.806 27882.589 - 28001.745: 99.1623% ( 3) 00:44:49.806 28001.745 - 28120.902: 99.1811% ( 2) 00:44:49.806 28120.902 - 28240.058: 99.2093% ( 3) 00:44:49.806 28240.058 - 28359.215: 99.2376% ( 3) 00:44:49.806 28359.215 - 28478.371: 99.2658% ( 3) 00:44:49.806 28478.371 - 28597.527: 99.2846% ( 2) 00:44:49.806 28597.527 - 28716.684: 99.3129% ( 3) 00:44:49.806 28716.684 - 28835.840: 99.3317% ( 2) 00:44:49.806 28835.840 - 28954.996: 99.3599% ( 3) 00:44:49.806 28954.996 - 29074.153: 99.3882% ( 3) 00:44:49.806 29074.153 - 29193.309: 99.3976% ( 1) 00:44:49.806 34317.033 - 34555.345: 99.4352% ( 4) 00:44:49.806 34555.345 - 34793.658: 99.4917% ( 6) 00:44:49.806 34793.658 - 35031.971: 99.5482% ( 6) 00:44:49.806 35031.971 - 35270.284: 99.6047% ( 6) 00:44:49.806 35270.284 - 35508.596: 99.6517% ( 5) 00:44:49.806 35508.596 - 35746.909: 99.7082% ( 6) 00:44:49.806 35746.909 - 35985.222: 99.7647% ( 6) 00:44:49.806 35985.222 - 36223.535: 99.8212% ( 6) 00:44:49.806 36223.535 - 36461.847: 99.8776% ( 6) 00:44:49.806 36461.847 - 36700.160: 99.9341% ( 6) 00:44:49.806 36700.160 - 36938.473: 99.9812% ( 5) 00:44:49.806 36938.473 - 37176.785: 100.0000% ( 2) 00:44:49.806 00:44:49.806 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:44:49.806 ============================================================================== 00:44:49.806 Range in us Cumulative IO count 00:44:49.806 10187.869 - 10247.447: 0.0188% ( 2) 00:44:49.806 10247.447 - 10307.025: 0.2071% ( 20) 00:44:49.806 10307.025 - 10366.604: 0.4989% ( 31) 00:44:49.806 10366.604 - 10426.182: 0.9413% ( 47) 00:44:49.806 10426.182 - 10485.760: 1.6190% ( 72) 00:44:49.806 10485.760 - 10545.338: 2.3532% ( 78) 00:44:49.806 10545.338 - 10604.916: 3.4544% ( 117) 00:44:49.806 10604.916 - 10664.495: 4.7252% ( 135) 00:44:49.806 10664.495 - 10724.073: 6.2406% ( 161) 00:44:49.806 10724.073 - 10783.651: 7.8878% ( 175) 00:44:49.806 10783.651 - 10843.229: 9.9021% ( 214) 00:44:49.806 10843.229 - 10902.807: 11.9729% ( 220) 00:44:49.806 10902.807 - 10962.385: 14.5425% ( 273) 00:44:49.806 10962.385 - 11021.964: 17.2628% ( 289) 00:44:49.806 11021.964 - 11081.542: 20.8490% ( 381) 00:44:49.806 11081.542 - 11141.120: 24.5576% ( 394) 00:44:49.806 11141.120 - 11200.698: 27.5979% ( 323) 00:44:49.806 11200.698 - 11260.276: 30.9394% ( 355) 00:44:49.806 11260.276 - 11319.855: 34.3562% ( 363) 00:44:49.806 11319.855 - 11379.433: 37.5000% ( 334) 00:44:49.806 11379.433 - 11439.011: 40.0791% ( 274) 00:44:49.806 11439.011 - 11498.589: 42.8840% ( 298) 00:44:49.806 11498.589 - 11558.167: 45.4255% ( 270) 00:44:49.806 11558.167 - 11617.745: 48.1363% ( 288) 00:44:49.806 11617.745 - 11677.324: 50.8377% ( 287) 00:44:49.806 11677.324 - 11736.902: 53.8498% ( 320) 00:44:49.806 11736.902 - 11796.480: 56.6359% ( 296) 00:44:49.806 11796.480 - 11856.058: 59.9962% ( 357) 00:44:49.806 11856.058 - 11915.636: 62.8106% ( 299) 00:44:49.806 11915.636 - 11975.215: 
65.2108% ( 255) 00:44:49.806 11975.215 - 12034.793: 67.6581% ( 260) 00:44:49.806 12034.793 - 12094.371: 70.0866% ( 258) 00:44:49.806 12094.371 - 12153.949: 72.3550% ( 241) 00:44:49.806 12153.949 - 12213.527: 74.6141% ( 240) 00:44:49.806 12213.527 - 12273.105: 76.4684% ( 197) 00:44:49.806 12273.105 - 12332.684: 78.2380% ( 188) 00:44:49.806 12332.684 - 12392.262: 79.6216% ( 147) 00:44:49.806 12392.262 - 12451.840: 81.2688% ( 175) 00:44:49.806 12451.840 - 12511.418: 82.5489% ( 136) 00:44:49.806 12511.418 - 12570.996: 83.7538% ( 128) 00:44:49.806 12570.996 - 12630.575: 84.9774% ( 130) 00:44:49.806 12630.575 - 12690.153: 86.2858% ( 139) 00:44:49.806 12690.153 - 12749.731: 87.6318% ( 143) 00:44:49.806 12749.731 - 12809.309: 88.5825% ( 101) 00:44:49.806 12809.309 - 12868.887: 89.2508% ( 71) 00:44:49.806 12868.887 - 12928.465: 89.9191% ( 71) 00:44:49.806 12928.465 - 12988.044: 90.5591% ( 68) 00:44:49.806 12988.044 - 13047.622: 91.0015% ( 47) 00:44:49.806 13047.622 - 13107.200: 91.4627% ( 49) 00:44:49.806 13107.200 - 13166.778: 91.8863% ( 45) 00:44:49.806 13166.778 - 13226.356: 92.2534% ( 39) 00:44:49.806 13226.356 - 13285.935: 92.6675% ( 44) 00:44:49.806 13285.935 - 13345.513: 93.1570% ( 52) 00:44:49.806 13345.513 - 13405.091: 93.5429% ( 41) 00:44:49.806 13405.091 - 13464.669: 93.8253% ( 30) 00:44:49.806 13464.669 - 13524.247: 94.1077% ( 30) 00:44:49.806 13524.247 - 13583.825: 94.3806% ( 29) 00:44:49.806 13583.825 - 13643.404: 94.6160% ( 25) 00:44:49.806 13643.404 - 13702.982: 94.8230% ( 22) 00:44:49.806 13702.982 - 13762.560: 95.0395% ( 23) 00:44:49.806 13762.560 - 13822.138: 95.2184% ( 19) 00:44:49.806 13822.138 - 13881.716: 95.3972% ( 19) 00:44:49.806 13881.716 - 13941.295: 95.6514% ( 27) 00:44:49.806 13941.295 - 14000.873: 95.8584% ( 22) 00:44:49.806 14000.873 - 14060.451: 96.0749% ( 23) 00:44:49.806 14060.451 - 14120.029: 96.3102% ( 25) 00:44:49.806 14120.029 - 14179.607: 96.5079% ( 21) 00:44:49.806 14179.607 - 14239.185: 96.6303% ( 13) 00:44:49.806 14239.185 - 14298.764: 96.8185% ( 20) 00:44:49.806 14298.764 - 14358.342: 97.0821% ( 28) 00:44:49.806 14358.342 - 14417.920: 97.2892% ( 22) 00:44:49.806 14417.920 - 14477.498: 97.5151% ( 24) 00:44:49.806 14477.498 - 14537.076: 97.7221% ( 22) 00:44:49.806 14537.076 - 14596.655: 97.8916% ( 18) 00:44:49.806 14596.655 - 14656.233: 98.3057% ( 44) 00:44:49.807 14656.233 - 14715.811: 98.4469% ( 15) 00:44:49.807 14715.811 - 14775.389: 98.5316% ( 9) 00:44:49.807 14775.389 - 14834.967: 98.5975% ( 7) 00:44:49.807 14834.967 - 14894.545: 98.6634% ( 7) 00:44:49.807 14894.545 - 14954.124: 98.7293% ( 7) 00:44:49.807 14954.124 - 15013.702: 98.7669% ( 4) 00:44:49.807 15013.702 - 15073.280: 98.7952% ( 3) 00:44:49.807 23473.804 - 23592.960: 98.8046% ( 1) 00:44:49.807 23592.960 - 23712.116: 98.9175% ( 12) 00:44:49.807 23712.116 - 23831.273: 98.9458% ( 3) 00:44:49.807 23831.273 - 23950.429: 98.9740% ( 3) 00:44:49.807 23950.429 - 24069.585: 98.9928% ( 2) 00:44:49.807 24069.585 - 24188.742: 99.0211% ( 3) 00:44:49.807 24188.742 - 24307.898: 99.0399% ( 2) 00:44:49.807 24307.898 - 24427.055: 99.0587% ( 2) 00:44:49.807 24427.055 - 24546.211: 99.0964% ( 4) 00:44:49.807 24546.211 - 24665.367: 99.1152% ( 2) 00:44:49.807 24665.367 - 24784.524: 99.1434% ( 3) 00:44:49.807 24784.524 - 24903.680: 99.1623% ( 2) 00:44:49.807 24903.680 - 25022.836: 99.1905% ( 3) 00:44:49.807 25022.836 - 25141.993: 99.2188% ( 3) 00:44:49.807 25141.993 - 25261.149: 99.2470% ( 3) 00:44:49.807 25261.149 - 25380.305: 99.2752% ( 3) 00:44:49.807 25380.305 - 25499.462: 99.3035% ( 3) 00:44:49.807 25499.462 - 
25618.618: 99.3317% ( 3) 00:44:49.807 25618.618 - 25737.775: 99.3505% ( 2) 00:44:49.807 25737.775 - 25856.931: 99.3788% ( 3) 00:44:49.807 25856.931 - 25976.087: 99.3976% ( 2) 00:44:49.807 31218.967 - 31457.280: 99.4258% ( 3) 00:44:49.807 31457.280 - 31695.593: 99.4823% ( 6) 00:44:49.807 31695.593 - 31933.905: 99.5482% ( 7) 00:44:49.807 31933.905 - 32172.218: 99.5953% ( 5) 00:44:49.807 32172.218 - 32410.531: 99.6423% ( 5) 00:44:49.807 32410.531 - 32648.844: 99.6894% ( 5) 00:44:49.807 32648.844 - 32887.156: 99.7459% ( 6) 00:44:49.807 32887.156 - 33125.469: 99.7929% ( 5) 00:44:49.807 33125.469 - 33363.782: 99.8494% ( 6) 00:44:49.807 33363.782 - 33602.095: 99.9059% ( 6) 00:44:49.807 33602.095 - 33840.407: 99.9529% ( 5) 00:44:49.807 33840.407 - 34078.720: 100.0000% ( 5) 00:44:49.807 00:44:49.807 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:44:49.807 ============================================================================== 00:44:49.807 Range in us Cumulative IO count 00:44:49.807 10068.713 - 10128.291: 0.0094% ( 1) 00:44:49.807 10128.291 - 10187.869: 0.0659% ( 6) 00:44:49.807 10187.869 - 10247.447: 0.1600% ( 10) 00:44:49.807 10247.447 - 10307.025: 0.2353% ( 8) 00:44:49.807 10307.025 - 10366.604: 0.3671% ( 14) 00:44:49.807 10366.604 - 10426.182: 0.9130% ( 58) 00:44:49.807 10426.182 - 10485.760: 1.5437% ( 67) 00:44:49.807 10485.760 - 10545.338: 2.1837% ( 68) 00:44:49.807 10545.338 - 10604.916: 2.9367% ( 80) 00:44:49.807 10604.916 - 10664.495: 4.4051% ( 156) 00:44:49.807 10664.495 - 10724.073: 6.2782% ( 199) 00:44:49.807 10724.073 - 10783.651: 8.0949% ( 193) 00:44:49.807 10783.651 - 10843.229: 10.1186% ( 215) 00:44:49.807 10843.229 - 10902.807: 12.6035% ( 264) 00:44:49.807 10902.807 - 10962.385: 14.7684% ( 230) 00:44:49.807 10962.385 - 11021.964: 17.6770% ( 309) 00:44:49.807 11021.964 - 11081.542: 20.6137% ( 312) 00:44:49.807 11081.542 - 11141.120: 24.0964% ( 370) 00:44:49.807 11141.120 - 11200.698: 27.4379% ( 355) 00:44:49.807 11200.698 - 11260.276: 30.6570% ( 342) 00:44:49.807 11260.276 - 11319.855: 33.9608% ( 351) 00:44:49.807 11319.855 - 11379.433: 37.1329% ( 337) 00:44:49.807 11379.433 - 11439.011: 40.2391% ( 330) 00:44:49.807 11439.011 - 11498.589: 43.5335% ( 350) 00:44:49.807 11498.589 - 11558.167: 46.1220% ( 275) 00:44:49.807 11558.167 - 11617.745: 48.9458% ( 300) 00:44:49.807 11617.745 - 11677.324: 51.6378% ( 286) 00:44:49.807 11677.324 - 11736.902: 54.1604% ( 268) 00:44:49.807 11736.902 - 11796.480: 56.7206% ( 272) 00:44:49.807 11796.480 - 11856.058: 59.0456% ( 247) 00:44:49.807 11856.058 - 11915.636: 61.5493% ( 266) 00:44:49.807 11915.636 - 11975.215: 63.9684% ( 257) 00:44:49.807 11975.215 - 12034.793: 66.6510% ( 285) 00:44:49.807 12034.793 - 12094.371: 69.5595% ( 309) 00:44:49.807 12094.371 - 12153.949: 71.9785% ( 257) 00:44:49.807 12153.949 - 12213.527: 74.2564% ( 242) 00:44:49.807 12213.527 - 12273.105: 76.4401% ( 232) 00:44:49.807 12273.105 - 12332.684: 78.2097% ( 188) 00:44:49.807 12332.684 - 12392.262: 79.8381% ( 173) 00:44:49.807 12392.262 - 12451.840: 81.1935% ( 144) 00:44:49.807 12451.840 - 12511.418: 82.6054% ( 150) 00:44:49.807 12511.418 - 12570.996: 83.7067% ( 117) 00:44:49.807 12570.996 - 12630.575: 84.5633% ( 91) 00:44:49.807 12630.575 - 12690.153: 85.5422% ( 104) 00:44:49.807 12690.153 - 12749.731: 86.6623% ( 119) 00:44:49.807 12749.731 - 12809.309: 87.5282% ( 92) 00:44:49.807 12809.309 - 12868.887: 88.1871% ( 70) 00:44:49.807 12868.887 - 12928.465: 88.9119% ( 77) 00:44:49.807 12928.465 - 12988.044: 89.5331% ( 66) 00:44:49.807 12988.044 - 
13047.622: 90.2203% ( 73) 00:44:49.807 13047.622 - 13107.200: 90.8980% ( 72) 00:44:49.807 13107.200 - 13166.778: 91.6133% ( 76) 00:44:49.807 13166.778 - 13226.356: 92.3099% ( 74) 00:44:49.807 13226.356 - 13285.935: 92.8370% ( 56) 00:44:49.807 13285.935 - 13345.513: 93.2417% ( 43) 00:44:49.807 13345.513 - 13405.091: 93.7123% ( 50) 00:44:49.807 13405.091 - 13464.669: 94.1171% ( 43) 00:44:49.807 13464.669 - 13524.247: 94.4842% ( 39) 00:44:49.807 13524.247 - 13583.825: 94.8136% ( 35) 00:44:49.807 13583.825 - 13643.404: 95.1619% ( 37) 00:44:49.807 13643.404 - 13702.982: 95.4443% ( 30) 00:44:49.807 13702.982 - 13762.560: 95.6514% ( 22) 00:44:49.807 13762.560 - 13822.138: 95.8208% ( 18) 00:44:49.807 13822.138 - 13881.716: 95.9526% ( 14) 00:44:49.807 13881.716 - 13941.295: 96.0561% ( 11) 00:44:49.807 13941.295 - 14000.873: 96.1220% ( 7) 00:44:49.807 14000.873 - 14060.451: 96.2067% ( 9) 00:44:49.807 14060.451 - 14120.029: 96.3008% ( 10) 00:44:49.807 14120.029 - 14179.607: 96.4232% ( 13) 00:44:49.807 14179.607 - 14239.185: 96.5456% ( 13) 00:44:49.807 14239.185 - 14298.764: 96.8373% ( 31) 00:44:49.807 14298.764 - 14358.342: 97.1386% ( 32) 00:44:49.807 14358.342 - 14417.920: 97.2797% ( 15) 00:44:49.807 14417.920 - 14477.498: 97.4303% ( 16) 00:44:49.807 14477.498 - 14537.076: 97.5621% ( 14) 00:44:49.807 14537.076 - 14596.655: 97.7598% ( 21) 00:44:49.807 14596.655 - 14656.233: 97.9763% ( 23) 00:44:49.807 14656.233 - 14715.811: 98.1363% ( 17) 00:44:49.807 14715.811 - 14775.389: 98.2116% ( 8) 00:44:49.807 14775.389 - 14834.967: 98.2963% ( 9) 00:44:49.807 14834.967 - 14894.545: 98.4281% ( 14) 00:44:49.807 14894.545 - 14954.124: 98.5505% ( 13) 00:44:49.807 14954.124 - 15013.702: 98.6634% ( 12) 00:44:49.807 15013.702 - 15073.280: 98.7105% ( 5) 00:44:49.807 15073.280 - 15132.858: 98.7387% ( 3) 00:44:49.807 15132.858 - 15192.436: 98.7858% ( 5) 00:44:49.807 15192.436 - 15252.015: 98.7952% ( 1) 00:44:49.807 20971.520 - 21090.676: 98.8422% ( 5) 00:44:49.807 21090.676 - 21209.833: 98.9928% ( 16) 00:44:49.807 21209.833 - 21328.989: 99.0117% ( 2) 00:44:49.807 21328.989 - 21448.145: 99.0399% ( 3) 00:44:49.807 21448.145 - 21567.302: 99.0681% ( 3) 00:44:49.807 21567.302 - 21686.458: 99.0870% ( 2) 00:44:49.807 21686.458 - 21805.615: 99.1152% ( 3) 00:44:49.807 21805.615 - 21924.771: 99.1434% ( 3) 00:44:49.807 21924.771 - 22043.927: 99.1623% ( 2) 00:44:49.807 22043.927 - 22163.084: 99.1905% ( 3) 00:44:49.807 22163.084 - 22282.240: 99.2188% ( 3) 00:44:49.807 22282.240 - 22401.396: 99.2376% ( 2) 00:44:49.807 22401.396 - 22520.553: 99.2658% ( 3) 00:44:49.807 22520.553 - 22639.709: 99.2846% ( 2) 00:44:49.807 22639.709 - 22758.865: 99.3129% ( 3) 00:44:49.807 22758.865 - 22878.022: 99.3317% ( 2) 00:44:49.807 22878.022 - 22997.178: 99.3599% ( 3) 00:44:49.807 22997.178 - 23116.335: 99.3788% ( 2) 00:44:49.807 23116.335 - 23235.491: 99.3976% ( 2) 00:44:49.807 26452.713 - 26571.869: 99.5576% ( 17) 00:44:49.807 28597.527 - 28716.684: 99.5858% ( 3) 00:44:49.807 28716.684 - 28835.840: 99.6047% ( 2) 00:44:49.807 28835.840 - 28954.996: 99.6423% ( 4) 00:44:49.807 28954.996 - 29074.153: 99.6611% ( 2) 00:44:49.807 29074.153 - 29193.309: 99.6800% ( 2) 00:44:49.807 29193.309 - 29312.465: 99.7082% ( 3) 00:44:49.807 29312.465 - 29431.622: 99.7364% ( 3) 00:44:49.807 29431.622 - 29550.778: 99.7553% ( 2) 00:44:49.807 29550.778 - 29669.935: 99.7835% ( 3) 00:44:49.807 29669.935 - 29789.091: 99.8023% ( 2) 00:44:49.807 29789.091 - 29908.247: 99.8306% ( 3) 00:44:49.807 29908.247 - 30027.404: 99.8494% ( 2) 00:44:49.807 30027.404 - 30146.560: 99.8776% ( 
3)
00:44:49.807 30146.560 - 30265.716: 99.9059% ( 3)
00:44:49.807 30265.716 - 30384.873: 99.9247% ( 2)
00:44:49.807 30384.873 - 30504.029: 99.9529% ( 3)
00:44:49.807 30504.029 - 30742.342: 100.0000% ( 5)
00:44:49.807
00:44:50.066 09:55:56 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:44:50.066
00:44:50.066 real 0m2.782s
00:44:50.066 user 0m2.345s
00:44:50.066 sys 0m0.306s
00:44:50.066 09:55:56 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:44:50.066 09:55:56 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:44:50.066 ************************************
00:44:50.066 END TEST nvme_perf
00:44:50.066 ************************************
00:44:50.066 09:55:56 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:44:50.066 09:55:56 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:44:50.066 09:55:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:44:50.066 09:55:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:44:50.066 ************************************
00:44:50.066 START TEST nvme_hello_world
00:44:50.066 ************************************
00:44:50.066 09:55:56 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:44:50.325 Initializing NVMe Controllers
00:44:50.325 Attached to 0000:00:10.0
00:44:50.325 Namespace ID: 1 size: 6GB
00:44:50.325 Attached to 0000:00:11.0
00:44:50.325 Namespace ID: 1 size: 5GB
00:44:50.325 Attached to 0000:00:13.0
00:44:50.325 Namespace ID: 1 size: 1GB
00:44:50.325 Attached to 0000:00:12.0
00:44:50.325 Namespace ID: 1 size: 4GB
00:44:50.325 Namespace ID: 2 size: 4GB
00:44:50.325 Namespace ID: 3 size: 4GB
00:44:50.325 Initialization complete.
00:44:50.325 INFO: using host memory buffer for IO
00:44:50.325 Hello world!
00:44:50.325 INFO: using host memory buffer for IO
00:44:50.325 Hello world!
00:44:50.325 INFO: using host memory buffer for IO
00:44:50.325 Hello world!
00:44:50.325 INFO: using host memory buffer for IO
00:44:50.325 Hello world!
00:44:50.325 INFO: using host memory buffer for IO
00:44:50.325 Hello world!
00:44:50.325 INFO: using host memory buffer for IO
00:44:50.325 Hello world!
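The START TEST / END TEST banners and the real/user/sys triplet that bracket each test in this log come from the harness's run_test helper in autotest_common.sh. Judged purely from the output it leaves here, it behaves roughly like the sketch below; run_test_sketch is a hypothetical stand-in, and the real function additionally manages xtrace and checks exit codes:

  # Rough bash approximation of the run_test wrapper, inferred only from
  # the banners and timing lines visible in this log.
  run_test_sketch() {
      local name=$1
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # bash's time keyword produces the real/user/sys lines
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }

  run_test_sketch nvme_hello_world \
      /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0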
00:44:50.325 ************************************ 00:44:50.325 END TEST nvme_hello_world 00:44:50.325 ************************************ 00:44:50.325 00:44:50.325 real 0m0.340s 00:44:50.325 user 0m0.142s 00:44:50.325 sys 0m0.155s 00:44:50.325 09:55:57 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:50.325 09:55:57 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:44:50.325 09:55:57 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:44:50.325 09:55:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:50.325 09:55:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:50.325 09:55:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:50.325 ************************************ 00:44:50.325 START TEST nvme_sgl 00:44:50.325 ************************************ 00:44:50.325 09:55:57 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:44:50.584 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:44:50.584 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:44:50.584 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:44:50.843 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:44:50.843 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:44:50.843 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:44:50.843 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:44:50.843 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:44:50.843 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:44:50.843 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:44:50.843 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:44:50.843 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:44:50.843 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:44:50.843 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:44:50.843 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:44:50.843 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:44:50.843 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:44:50.843 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:44:50.843 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:44:50.843 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:44:50.843 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:44:50.843 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:44:50.843 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:44:50.843 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:44:50.843 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:44:50.843 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:44:50.843 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:44:50.843 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:44:50.843 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:44:50.843 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:44:50.843 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:44:50.843 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:44:50.843 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:44:50.843 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:44:50.843 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:44:50.843 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:44:50.843 NVMe Readv/Writev Request test 00:44:50.843 Attached to 0000:00:10.0 00:44:50.843 Attached to 0000:00:11.0 00:44:50.843 Attached to 0000:00:13.0 00:44:50.843 Attached to 0000:00:12.0 00:44:50.843 0000:00:10.0: build_io_request_2 test passed 00:44:50.843 0000:00:10.0: build_io_request_4 test passed 00:44:50.843 0000:00:10.0: build_io_request_5 test passed 00:44:50.843 0000:00:10.0: build_io_request_6 test passed 00:44:50.843 0000:00:10.0: build_io_request_7 test passed 00:44:50.843 0000:00:10.0: build_io_request_10 test passed 00:44:50.843 0000:00:11.0: build_io_request_2 test passed 00:44:50.843 0000:00:11.0: build_io_request_4 test passed 00:44:50.843 0000:00:11.0: build_io_request_5 test passed 00:44:50.843 0000:00:11.0: build_io_request_6 test passed 00:44:50.843 0000:00:11.0: build_io_request_7 test passed 00:44:50.843 0000:00:11.0: build_io_request_10 test passed 00:44:50.843 Cleaning up... 00:44:50.843 00:44:50.843 real 0m0.458s 00:44:50.843 user 0m0.225s 00:44:50.843 sys 0m0.174s 00:44:50.843 09:55:57 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:50.843 09:55:57 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:44:50.843 ************************************ 00:44:50.843 END TEST nvme_sgl 00:44:50.843 ************************************ 00:44:50.843 09:55:57 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:44:50.843 09:55:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:50.843 09:55:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:50.843 09:55:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:50.843 ************************************ 00:44:50.843 START TEST nvme_e2edp 00:44:50.843 ************************************ 00:44:50.843 09:55:57 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:44:51.101 NVMe Write/Read with End-to-End data protection test 00:44:51.101 Attached to 0000:00:10.0 00:44:51.101 Attached to 0000:00:11.0 00:44:51.101 Attached to 0000:00:13.0 00:44:51.101 Attached to 0000:00:12.0 00:44:51.101 Cleaning up... 
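Every test in this phase is launched through the harness's run_test helper, which prints the START/END banners and the real/user/sys timings seen throughout this log. A simplified stand-in that shows the shape of the wrapper (SPDK's actual implementation lives in test/common/autotest_common.sh and additionally manages xtrace state):

run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"            # run the test binary with its remaining arguments
    local rc=$?          # capture the test's exit status
    echo "************ END TEST $name ************"
    return $rc
}
# e.g. the nvme_sgl run above corresponds to:
#   run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl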
00:44:51.360 ************************************ 00:44:51.360 END TEST nvme_e2edp 00:44:51.360 ************************************ 00:44:51.360 00:44:51.360 real 0m0.339s 00:44:51.360 user 0m0.136s 00:44:51.360 sys 0m0.157s 00:44:51.360 09:55:58 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:51.360 09:55:58 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:44:51.360 09:55:58 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:44:51.360 09:55:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:51.360 09:55:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:51.360 09:55:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:51.360 ************************************ 00:44:51.360 START TEST nvme_reserve 00:44:51.360 ************************************ 00:44:51.360 09:55:58 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:44:51.618 ===================================================== 00:44:51.618 NVMe Controller at PCI bus 0, device 16, function 0 00:44:51.618 ===================================================== 00:44:51.618 Reservations: Not Supported 00:44:51.618 ===================================================== 00:44:51.618 NVMe Controller at PCI bus 0, device 17, function 0 00:44:51.618 ===================================================== 00:44:51.618 Reservations: Not Supported 00:44:51.618 ===================================================== 00:44:51.618 NVMe Controller at PCI bus 0, device 19, function 0 00:44:51.618 ===================================================== 00:44:51.618 Reservations: Not Supported 00:44:51.618 ===================================================== 00:44:51.618 NVMe Controller at PCI bus 0, device 18, function 0 00:44:51.618 ===================================================== 00:44:51.618 Reservations: Not Supported 00:44:51.618 Reservation test passed 00:44:51.618 ************************************ 00:44:51.618 END TEST nvme_reserve 00:44:51.618 ************************************ 00:44:51.618 00:44:51.618 real 0m0.338s 00:44:51.618 user 0m0.120s 00:44:51.618 sys 0m0.171s 00:44:51.618 09:55:58 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:51.618 09:55:58 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:44:51.618 09:55:58 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:44:51.618 09:55:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:51.618 09:55:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:51.618 09:55:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:51.618 ************************************ 00:44:51.618 START TEST nvme_err_injection 00:44:51.618 ************************************ 00:44:51.618 09:55:58 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:44:52.185 NVMe Error Injection test 00:44:52.185 Attached to 0000:00:10.0 00:44:52.185 Attached to 0000:00:11.0 00:44:52.185 Attached to 0000:00:13.0 00:44:52.185 Attached to 0000:00:12.0 00:44:52.185 0000:00:10.0: get features failed as expected 00:44:52.185 0000:00:11.0: get features failed as expected 00:44:52.185 0000:00:13.0: get features failed as expected 00:44:52.185 0000:00:12.0: get features failed as expected 00:44:52.185 
0000:00:12.0: get features successfully as expected 00:44:52.185 0000:00:10.0: get features successfully as expected 00:44:52.185 0000:00:11.0: get features successfully as expected 00:44:52.185 0000:00:13.0: get features successfully as expected 00:44:52.185 0000:00:11.0: read failed as expected 00:44:52.185 0000:00:13.0: read failed as expected 00:44:52.185 0000:00:12.0: read failed as expected 00:44:52.185 0000:00:10.0: read failed as expected 00:44:52.185 0000:00:11.0: read successfully as expected 00:44:52.185 0000:00:13.0: read successfully as expected 00:44:52.185 0000:00:10.0: read successfully as expected 00:44:52.185 0000:00:12.0: read successfully as expected 00:44:52.185 Cleaning up... 00:44:52.185 00:44:52.185 real 0m0.353s 00:44:52.185 user 0m0.131s 00:44:52.185 sys 0m0.172s 00:44:52.185 09:55:58 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:52.185 09:55:58 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:44:52.185 ************************************ 00:44:52.185 END TEST nvme_err_injection 00:44:52.185 ************************************ 00:44:52.185 09:55:58 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:44:52.185 09:55:58 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:44:52.185 09:55:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:52.185 09:55:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:52.185 ************************************ 00:44:52.185 START TEST nvme_overhead 00:44:52.185 ************************************ 00:44:52.185 09:55:58 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:44:53.564 Initializing NVMe Controllers 00:44:53.564 Attached to 0000:00:10.0 00:44:53.564 Attached to 0000:00:11.0 00:44:53.564 Attached to 0000:00:13.0 00:44:53.564 Attached to 0000:00:12.0 00:44:53.564 Initialization complete. Launching workers. 
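The overhead tool launched above measures per-I/O submit and completion costs; judging by the invocation, -o 4096 selects 4 KiB I/Os, -t 1 runs for one second, and -H requests the submit/complete latency histograms printed next. To repeat only this measurement, assuming the same checkout and already-bound devices:

sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0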
00:44:53.564 submit (in ns) avg, min, max = 15929.8, 13662.7, 59151.4 00:44:53.564 complete (in ns) avg, min, max = 10421.8, 9134.1, 132070.9 00:44:53.564 00:44:53.564 Submit histogram 00:44:53.564 ================ 00:44:53.564 Range in us Cumulative Count 00:44:53.564 13.615 - 13.673: 0.0108% ( 1) 00:44:53.564 14.196 - 14.255: 0.1403% ( 12) 00:44:53.564 14.255 - 14.313: 0.3993% ( 24) 00:44:53.564 14.313 - 14.371: 1.2949% ( 83) 00:44:53.564 14.371 - 14.429: 3.6366% ( 217) 00:44:53.564 14.429 - 14.487: 8.3198% ( 434) 00:44:53.564 14.487 - 14.545: 16.3160% ( 741) 00:44:53.564 14.545 - 14.604: 26.2005% ( 916) 00:44:53.564 14.604 - 14.662: 36.4411% ( 949) 00:44:53.564 14.662 - 14.720: 44.9984% ( 793) 00:44:53.564 14.720 - 14.778: 51.3435% ( 588) 00:44:53.564 14.778 - 14.836: 55.7570% ( 409) 00:44:53.564 14.836 - 14.895: 58.9619% ( 297) 00:44:53.564 14.895 - 15.011: 62.8682% ( 362) 00:44:53.564 15.011 - 15.127: 65.1020% ( 207) 00:44:53.564 15.127 - 15.244: 67.2170% ( 196) 00:44:53.564 15.244 - 15.360: 68.8896% ( 155) 00:44:53.564 15.360 - 15.476: 70.4867% ( 148) 00:44:53.564 15.476 - 15.593: 71.7384% ( 116) 00:44:53.564 15.593 - 15.709: 72.6665% ( 86) 00:44:53.564 15.709 - 15.825: 73.2276% ( 52) 00:44:53.564 15.825 - 15.942: 73.7024% ( 44) 00:44:53.564 15.942 - 16.058: 74.0261% ( 30) 00:44:53.564 16.058 - 16.175: 74.2959% ( 25) 00:44:53.564 16.175 - 16.291: 74.5872% ( 27) 00:44:53.564 16.291 - 16.407: 74.7707% ( 17) 00:44:53.564 16.407 - 16.524: 74.9002% ( 12) 00:44:53.564 16.524 - 16.640: 74.9541% ( 5) 00:44:53.564 16.640 - 16.756: 75.0189% ( 6) 00:44:53.564 16.756 - 16.873: 75.0836% ( 6) 00:44:53.564 16.873 - 16.989: 75.1484% ( 6) 00:44:53.564 16.989 - 17.105: 75.1915% ( 4) 00:44:53.564 17.105 - 17.222: 75.2239% ( 3) 00:44:53.564 17.222 - 17.338: 75.3102% ( 8) 00:44:53.564 17.338 - 17.455: 75.6771% ( 34) 00:44:53.564 17.455 - 17.571: 77.9432% ( 210) 00:44:53.564 17.571 - 17.687: 82.0006% ( 376) 00:44:53.564 17.687 - 17.804: 85.6372% ( 337) 00:44:53.564 17.804 - 17.920: 87.4825% ( 171) 00:44:53.564 17.920 - 18.036: 88.8529% ( 127) 00:44:53.564 18.036 - 18.153: 89.8133% ( 89) 00:44:53.564 18.153 - 18.269: 90.6011% ( 73) 00:44:53.564 18.269 - 18.385: 91.2485% ( 60) 00:44:53.564 18.385 - 18.502: 91.8960% ( 60) 00:44:53.564 18.502 - 18.618: 92.4247% ( 49) 00:44:53.564 18.618 - 18.735: 92.7269% ( 28) 00:44:53.564 18.735 - 18.851: 92.9859% ( 24) 00:44:53.564 18.851 - 18.967: 93.1477% ( 15) 00:44:53.564 18.967 - 19.084: 93.3635% ( 20) 00:44:53.564 19.084 - 19.200: 93.5038% ( 13) 00:44:53.564 19.200 - 19.316: 93.6441% ( 13) 00:44:53.564 19.316 - 19.433: 93.6981% ( 5) 00:44:53.565 19.433 - 19.549: 93.7952% ( 9) 00:44:53.565 19.549 - 19.665: 93.9139% ( 11) 00:44:53.565 19.665 - 19.782: 93.9571% ( 4) 00:44:53.565 19.782 - 19.898: 94.0326% ( 7) 00:44:53.565 19.898 - 20.015: 94.1081% ( 7) 00:44:53.565 20.015 - 20.131: 94.1513% ( 4) 00:44:53.565 20.131 - 20.247: 94.2484% ( 9) 00:44:53.565 20.247 - 20.364: 94.3239% ( 7) 00:44:53.565 20.364 - 20.480: 94.4858% ( 15) 00:44:53.565 20.480 - 20.596: 94.6369% ( 14) 00:44:53.565 20.596 - 20.713: 94.8419% ( 19) 00:44:53.565 20.713 - 20.829: 95.0038% ( 15) 00:44:53.565 20.829 - 20.945: 95.1764% ( 16) 00:44:53.565 20.945 - 21.062: 95.3599% ( 17) 00:44:53.565 21.062 - 21.178: 95.4894% ( 12) 00:44:53.565 21.178 - 21.295: 95.5973% ( 10) 00:44:53.565 21.295 - 21.411: 95.7268% ( 12) 00:44:53.565 21.411 - 21.527: 95.8131% ( 8) 00:44:53.565 21.527 - 21.644: 95.9210% ( 10) 00:44:53.565 21.644 - 21.760: 96.0613% ( 13) 00:44:53.565 21.760 - 21.876: 96.1045% ( 4) 00:44:53.565 
21.876 - 21.993: 96.1584% ( 5) 00:44:53.565 21.993 - 22.109: 96.2339% ( 7) 00:44:53.565 22.109 - 22.225: 96.3311% ( 9) 00:44:53.565 22.225 - 22.342: 96.3634% ( 3) 00:44:53.565 22.342 - 22.458: 96.4498% ( 8) 00:44:53.565 22.458 - 22.575: 96.5793% ( 12) 00:44:53.565 22.575 - 22.691: 96.6656% ( 8) 00:44:53.565 22.691 - 22.807: 96.7519% ( 8) 00:44:53.565 22.807 - 22.924: 96.8382% ( 8) 00:44:53.565 22.924 - 23.040: 96.9246% ( 8) 00:44:53.565 23.040 - 23.156: 97.0541% ( 12) 00:44:53.565 23.156 - 23.273: 97.1620% ( 10) 00:44:53.565 23.273 - 23.389: 97.2591% ( 9) 00:44:53.565 23.389 - 23.505: 97.3346% ( 7) 00:44:53.565 23.505 - 23.622: 97.4102% ( 7) 00:44:53.565 23.622 - 23.738: 97.4641% ( 5) 00:44:53.565 23.738 - 23.855: 97.4965% ( 3) 00:44:53.565 23.855 - 23.971: 97.5828% ( 8) 00:44:53.565 23.971 - 24.087: 97.6799% ( 9) 00:44:53.565 24.087 - 24.204: 97.7447% ( 6) 00:44:53.565 24.204 - 24.320: 97.8418% ( 9) 00:44:53.565 24.320 - 24.436: 97.9173% ( 7) 00:44:53.565 24.436 - 24.553: 98.0468% ( 12) 00:44:53.565 24.553 - 24.669: 98.0792% ( 3) 00:44:53.565 24.669 - 24.785: 98.1763% ( 9) 00:44:53.565 24.785 - 24.902: 98.3058% ( 12) 00:44:53.565 24.902 - 25.018: 98.3490% ( 4) 00:44:53.565 25.018 - 25.135: 98.3814% ( 3) 00:44:53.565 25.135 - 25.251: 98.4461% ( 6) 00:44:53.565 25.251 - 25.367: 98.5540% ( 10) 00:44:53.565 25.367 - 25.484: 98.6080% ( 5) 00:44:53.565 25.484 - 25.600: 98.6619% ( 5) 00:44:53.565 25.600 - 25.716: 98.6943% ( 3) 00:44:53.565 25.716 - 25.833: 98.7698% ( 7) 00:44:53.565 25.833 - 25.949: 98.8022% ( 3) 00:44:53.565 25.949 - 26.065: 98.8562% ( 5) 00:44:53.565 26.065 - 26.182: 98.8885% ( 3) 00:44:53.565 26.182 - 26.298: 98.9317% ( 4) 00:44:53.565 26.531 - 26.647: 98.9749% ( 4) 00:44:53.565 26.647 - 26.764: 99.0180% ( 4) 00:44:53.565 26.764 - 26.880: 99.0612% ( 4) 00:44:53.565 26.880 - 26.996: 99.0936% ( 3) 00:44:53.565 26.996 - 27.113: 99.1367% ( 4) 00:44:53.565 27.113 - 27.229: 99.1907% ( 5) 00:44:53.565 27.345 - 27.462: 99.2015% ( 1) 00:44:53.565 27.462 - 27.578: 99.2338% ( 3) 00:44:53.565 27.578 - 27.695: 99.2770% ( 4) 00:44:53.565 27.695 - 27.811: 99.2878% ( 1) 00:44:53.565 27.811 - 27.927: 99.3202% ( 3) 00:44:53.565 27.927 - 28.044: 99.3849% ( 6) 00:44:53.565 28.044 - 28.160: 99.3957% ( 1) 00:44:53.565 28.160 - 28.276: 99.4173% ( 2) 00:44:53.565 28.276 - 28.393: 99.4281% ( 1) 00:44:53.565 28.393 - 28.509: 99.4605% ( 3) 00:44:53.565 28.509 - 28.625: 99.4712% ( 1) 00:44:53.565 28.742 - 28.858: 99.5036% ( 3) 00:44:53.565 28.858 - 28.975: 99.5252% ( 2) 00:44:53.565 28.975 - 29.091: 99.5468% ( 2) 00:44:53.565 29.091 - 29.207: 99.5576% ( 1) 00:44:53.565 29.324 - 29.440: 99.6007% ( 4) 00:44:53.565 29.440 - 29.556: 99.6115% ( 1) 00:44:53.565 29.673 - 29.789: 99.6223% ( 1) 00:44:53.565 29.789 - 30.022: 99.6331% ( 1) 00:44:53.565 30.022 - 30.255: 99.6655% ( 3) 00:44:53.565 30.255 - 30.487: 99.6763% ( 1) 00:44:53.565 30.487 - 30.720: 99.6979% ( 2) 00:44:53.565 30.720 - 30.953: 99.7302% ( 3) 00:44:53.565 30.953 - 31.185: 99.7518% ( 2) 00:44:53.565 31.185 - 31.418: 99.7626% ( 1) 00:44:53.565 31.418 - 31.651: 99.7734% ( 1) 00:44:53.565 32.116 - 32.349: 99.7842% ( 1) 00:44:53.565 32.815 - 33.047: 99.7950% ( 1) 00:44:53.565 33.047 - 33.280: 99.8058% ( 1) 00:44:53.565 33.513 - 33.745: 99.8273% ( 2) 00:44:53.565 33.745 - 33.978: 99.8381% ( 1) 00:44:53.565 33.978 - 34.211: 99.8489% ( 1) 00:44:53.565 34.211 - 34.444: 99.8597% ( 1) 00:44:53.565 35.840 - 36.073: 99.8705% ( 1) 00:44:53.565 36.305 - 36.538: 99.8813% ( 1) 00:44:53.565 37.236 - 37.469: 99.8921% ( 1) 00:44:53.565 37.469 - 37.702: 99.9029% ( 
1) 00:44:53.565 38.633 - 38.865: 99.9137% ( 1) 00:44:53.565 41.193 - 41.425: 99.9245% ( 1) 00:44:53.565 45.382 - 45.615: 99.9353% ( 1) 00:44:53.565 47.011 - 47.244: 99.9460% ( 1) 00:44:53.565 47.476 - 47.709: 99.9568% ( 1) 00:44:53.565 50.502 - 50.735: 99.9676% ( 1) 00:44:53.565 52.829 - 53.062: 99.9784% ( 1) 00:44:53.565 57.716 - 57.949: 99.9892% ( 1) 00:44:53.565 59.113 - 59.345: 100.0000% ( 1) 00:44:53.565 00:44:53.565 Complete histogram 00:44:53.565 ================== 00:44:53.565 Range in us Cumulative Count 00:44:53.565 9.076 - 9.135: 0.0108% ( 1) 00:44:53.565 9.135 - 9.193: 0.3561% ( 32) 00:44:53.565 9.193 - 9.251: 3.0538% ( 250) 00:44:53.565 9.251 - 9.309: 11.4492% ( 778) 00:44:53.565 9.309 - 9.367: 25.6070% ( 1312) 00:44:53.565 9.367 - 9.425: 41.4050% ( 1464) 00:44:53.565 9.425 - 9.484: 53.2859% ( 1101) 00:44:53.565 9.484 - 9.542: 59.6633% ( 591) 00:44:53.565 9.542 - 9.600: 63.1488% ( 323) 00:44:53.565 9.600 - 9.658: 65.0804% ( 179) 00:44:53.565 9.658 - 9.716: 66.2242% ( 106) 00:44:53.565 9.716 - 9.775: 67.0444% ( 76) 00:44:53.565 9.775 - 9.833: 67.5299% ( 45) 00:44:53.565 9.833 - 9.891: 67.8968% ( 34) 00:44:53.565 9.891 - 9.949: 68.0587% ( 15) 00:44:53.565 9.949 - 10.007: 68.1342% ( 7) 00:44:53.565 10.007 - 10.065: 68.1774% ( 4) 00:44:53.565 10.065 - 10.124: 68.2421% ( 6) 00:44:53.565 10.124 - 10.182: 68.3177% ( 7) 00:44:53.565 10.182 - 10.240: 68.4472% ( 12) 00:44:53.565 10.240 - 10.298: 68.7925% ( 32) 00:44:53.565 10.298 - 10.356: 69.3320% ( 50) 00:44:53.565 10.356 - 10.415: 69.8392% ( 47) 00:44:53.565 10.415 - 10.473: 70.6701% ( 77) 00:44:53.565 10.473 - 10.531: 71.3607% ( 64) 00:44:53.565 10.531 - 10.589: 71.9327% ( 53) 00:44:53.565 10.589 - 10.647: 72.5046% ( 53) 00:44:53.565 10.647 - 10.705: 72.9362% ( 40) 00:44:53.565 10.705 - 10.764: 73.2707% ( 31) 00:44:53.565 10.764 - 10.822: 73.5405% ( 25) 00:44:53.565 10.822 - 10.880: 73.8427% ( 28) 00:44:53.565 10.880 - 10.938: 74.0909% ( 23) 00:44:53.565 10.938 - 10.996: 74.2743% ( 17) 00:44:53.565 10.996 - 11.055: 74.5117% ( 22) 00:44:53.565 11.055 - 11.113: 74.5980% ( 8) 00:44:53.565 11.113 - 11.171: 74.6628% ( 6) 00:44:53.565 11.171 - 11.229: 74.7491% ( 8) 00:44:53.565 11.229 - 11.287: 74.7707% ( 2) 00:44:53.565 11.287 - 11.345: 74.8354% ( 6) 00:44:53.565 11.345 - 11.404: 74.9002% ( 6) 00:44:53.565 11.404 - 11.462: 74.9865% ( 8) 00:44:53.565 11.462 - 11.520: 75.2023% ( 20) 00:44:53.565 11.520 - 11.578: 75.9901% ( 73) 00:44:53.565 11.578 - 11.636: 78.3425% ( 218) 00:44:53.565 11.636 - 11.695: 81.7525% ( 316) 00:44:53.565 11.695 - 11.753: 85.0005% ( 301) 00:44:53.565 11.753 - 11.811: 87.7307% ( 253) 00:44:53.565 11.811 - 11.869: 89.5867% ( 172) 00:44:53.565 11.869 - 11.927: 90.5687% ( 91) 00:44:53.565 11.927 - 11.985: 91.1730% ( 56) 00:44:53.565 11.985 - 12.044: 91.5507% ( 35) 00:44:53.565 12.044 - 12.102: 91.8960% ( 32) 00:44:53.565 12.102 - 12.160: 92.1334% ( 22) 00:44:53.565 12.160 - 12.218: 92.2737% ( 13) 00:44:53.565 12.218 - 12.276: 92.3816% ( 10) 00:44:53.565 12.276 - 12.335: 92.4787% ( 9) 00:44:53.565 12.335 - 12.393: 92.5326% ( 5) 00:44:53.565 12.393 - 12.451: 92.5866% ( 5) 00:44:53.565 12.451 - 12.509: 92.6190% ( 3) 00:44:53.565 12.509 - 12.567: 92.6406% ( 2) 00:44:53.565 12.567 - 12.625: 92.6945% ( 5) 00:44:53.565 12.625 - 12.684: 92.9427% ( 23) 00:44:53.565 12.684 - 12.742: 93.2772% ( 31) 00:44:53.565 12.742 - 12.800: 93.4822% ( 19) 00:44:53.565 12.800 - 12.858: 93.7412% ( 24) 00:44:53.565 12.858 - 12.916: 93.9463% ( 19) 00:44:53.565 12.916 - 12.975: 94.1837% ( 22) 00:44:53.565 12.975 - 13.033: 94.3563% ( 16) 
00:44:53.565 13.033 - 13.091: 94.4966% ( 13) 00:44:53.565 13.091 - 13.149: 94.6153% ( 11) 00:44:53.565 13.149 - 13.207: 94.7664% ( 14) 00:44:53.565 13.207 - 13.265: 94.8959% ( 12) 00:44:53.566 13.265 - 13.324: 94.9498% ( 5) 00:44:53.566 13.324 - 13.382: 95.0146% ( 6) 00:44:53.566 13.382 - 13.440: 95.1117% ( 9) 00:44:53.566 13.440 - 13.498: 95.2088% ( 9) 00:44:53.566 13.498 - 13.556: 95.2736% ( 6) 00:44:53.566 13.556 - 13.615: 95.3815% ( 10) 00:44:53.566 13.615 - 13.673: 95.4246% ( 4) 00:44:53.566 13.673 - 13.731: 95.4678% ( 4) 00:44:53.566 13.731 - 13.789: 95.5217% ( 5) 00:44:53.566 13.789 - 13.847: 95.5541% ( 3) 00:44:53.566 13.847 - 13.905: 95.5757% ( 2) 00:44:53.566 13.905 - 13.964: 95.6512% ( 7) 00:44:53.566 13.964 - 14.022: 95.7160% ( 6) 00:44:53.566 14.022 - 14.080: 95.7591% ( 4) 00:44:53.566 14.080 - 14.138: 95.8455% ( 8) 00:44:53.566 14.138 - 14.196: 95.9210% ( 7) 00:44:53.566 14.196 - 14.255: 95.9426% ( 2) 00:44:53.566 14.255 - 14.313: 95.9858% ( 4) 00:44:53.566 14.313 - 14.371: 96.0505% ( 6) 00:44:53.566 14.371 - 14.429: 96.1368% ( 8) 00:44:53.566 14.429 - 14.487: 96.1908% ( 5) 00:44:53.566 14.487 - 14.545: 96.2663% ( 7) 00:44:53.566 14.545 - 14.604: 96.3203% ( 5) 00:44:53.566 14.604 - 14.662: 96.3526% ( 3) 00:44:53.566 14.662 - 14.720: 96.3742% ( 2) 00:44:53.566 14.720 - 14.778: 96.3850% ( 1) 00:44:53.566 14.778 - 14.836: 96.4174% ( 3) 00:44:53.566 14.836 - 14.895: 96.4713% ( 5) 00:44:53.566 14.895 - 15.011: 96.5577% ( 8) 00:44:53.566 15.011 - 15.127: 96.6548% ( 9) 00:44:53.566 15.127 - 15.244: 96.6872% ( 3) 00:44:53.566 15.244 - 15.360: 96.7088% ( 2) 00:44:53.566 15.360 - 15.476: 96.7735% ( 6) 00:44:53.566 15.476 - 15.593: 96.8706% ( 9) 00:44:53.566 15.593 - 15.709: 96.9677% ( 9) 00:44:53.566 15.709 - 15.825: 97.0756% ( 10) 00:44:53.566 15.825 - 15.942: 97.1404% ( 6) 00:44:53.566 15.942 - 16.058: 97.2375% ( 9) 00:44:53.566 16.058 - 16.175: 97.3238% ( 8) 00:44:53.566 16.175 - 16.291: 97.4102% ( 8) 00:44:53.566 16.291 - 16.407: 97.4857% ( 7) 00:44:53.566 16.407 - 16.524: 97.5181% ( 3) 00:44:53.566 16.524 - 16.640: 97.5720% ( 5) 00:44:53.566 16.640 - 16.756: 97.6260% ( 5) 00:44:53.566 16.756 - 16.873: 97.6691% ( 4) 00:44:53.566 16.873 - 16.989: 97.7555% ( 8) 00:44:53.566 16.989 - 17.105: 97.7986% ( 4) 00:44:53.566 17.105 - 17.222: 97.8634% ( 6) 00:44:53.566 17.222 - 17.338: 97.9389% ( 7) 00:44:53.566 17.338 - 17.455: 98.0360% ( 9) 00:44:53.566 17.455 - 17.571: 98.1332% ( 9) 00:44:53.566 17.571 - 17.687: 98.2303% ( 9) 00:44:53.566 17.687 - 17.804: 98.3274% ( 9) 00:44:53.566 17.804 - 17.920: 98.3490% ( 2) 00:44:53.566 17.920 - 18.036: 98.3706% ( 2) 00:44:53.566 18.036 - 18.153: 98.4137% ( 4) 00:44:53.566 18.153 - 18.269: 98.4677% ( 5) 00:44:53.566 18.269 - 18.385: 98.5001% ( 3) 00:44:53.566 18.385 - 18.502: 98.5432% ( 4) 00:44:53.566 18.502 - 18.618: 98.6080% ( 6) 00:44:53.566 18.618 - 18.735: 98.6295% ( 2) 00:44:53.566 18.735 - 18.851: 98.6619% ( 3) 00:44:53.566 18.851 - 18.967: 98.6943% ( 3) 00:44:53.566 18.967 - 19.084: 98.7375% ( 4) 00:44:53.566 19.084 - 19.200: 98.7914% ( 5) 00:44:53.566 19.200 - 19.316: 98.8346% ( 4) 00:44:53.566 19.316 - 19.433: 98.8777% ( 4) 00:44:53.566 19.433 - 19.549: 98.9425% ( 6) 00:44:53.566 19.549 - 19.665: 98.9641% ( 2) 00:44:53.566 19.665 - 19.782: 99.0180% ( 5) 00:44:53.566 19.782 - 19.898: 99.0504% ( 3) 00:44:53.566 19.898 - 20.015: 99.0612% ( 1) 00:44:53.566 20.015 - 20.131: 99.1043% ( 4) 00:44:53.566 20.131 - 20.247: 99.1691% ( 6) 00:44:53.566 20.247 - 20.364: 99.2123% ( 4) 00:44:53.566 20.364 - 20.480: 99.2230% ( 1) 00:44:53.566 20.480 - 
20.596: 99.2554% ( 3) 00:44:53.566 20.596 - 20.713: 99.2662% ( 1) 00:44:53.566 20.713 - 20.829: 99.2770% ( 1) 00:44:53.566 20.945 - 21.062: 99.2878% ( 1) 00:44:53.566 21.062 - 21.178: 99.3310% ( 4) 00:44:53.566 21.178 - 21.295: 99.3525% ( 2) 00:44:53.566 21.295 - 21.411: 99.3741% ( 2) 00:44:53.566 21.411 - 21.527: 99.3957% ( 2) 00:44:53.566 21.527 - 21.644: 99.4281% ( 3) 00:44:53.566 21.644 - 21.760: 99.4605% ( 3) 00:44:53.566 21.760 - 21.876: 99.4820% ( 2) 00:44:53.566 21.876 - 21.993: 99.5036% ( 2) 00:44:53.566 21.993 - 22.109: 99.5252% ( 2) 00:44:53.566 22.109 - 22.225: 99.5576% ( 3) 00:44:53.566 22.225 - 22.342: 99.5792% ( 2) 00:44:53.566 22.458 - 22.575: 99.6007% ( 2) 00:44:53.566 22.575 - 22.691: 99.6115% ( 1) 00:44:53.566 22.691 - 22.807: 99.6223% ( 1) 00:44:53.566 22.807 - 22.924: 99.6655% ( 4) 00:44:53.566 22.924 - 23.040: 99.6763% ( 1) 00:44:53.566 23.040 - 23.156: 99.6871% ( 1) 00:44:53.566 23.156 - 23.273: 99.6979% ( 1) 00:44:53.566 23.389 - 23.505: 99.7086% ( 1) 00:44:53.566 23.622 - 23.738: 99.7194% ( 1) 00:44:53.566 23.738 - 23.855: 99.7302% ( 1) 00:44:53.566 23.855 - 23.971: 99.7410% ( 1) 00:44:53.566 23.971 - 24.087: 99.7518% ( 1) 00:44:53.566 24.553 - 24.669: 99.7626% ( 1) 00:44:53.566 25.018 - 25.135: 99.7734% ( 1) 00:44:53.566 25.135 - 25.251: 99.7842% ( 1) 00:44:53.566 26.065 - 26.182: 99.7950% ( 1) 00:44:53.566 26.531 - 26.647: 99.8058% ( 1) 00:44:53.566 26.880 - 26.996: 99.8166% ( 1) 00:44:53.566 27.229 - 27.345: 99.8273% ( 1) 00:44:53.566 27.462 - 27.578: 99.8381% ( 1) 00:44:53.566 27.578 - 27.695: 99.8489% ( 1) 00:44:53.566 27.811 - 27.927: 99.8597% ( 1) 00:44:53.566 28.276 - 28.393: 99.8705% ( 1) 00:44:53.566 33.978 - 34.211: 99.8813% ( 1) 00:44:53.566 34.211 - 34.444: 99.9029% ( 2) 00:44:53.566 34.676 - 34.909: 99.9137% ( 1) 00:44:53.566 34.909 - 35.142: 99.9245% ( 1) 00:44:53.566 40.960 - 41.193: 99.9353% ( 1) 00:44:53.566 53.993 - 54.225: 99.9460% ( 1) 00:44:53.566 57.716 - 57.949: 99.9568% ( 1) 00:44:53.566 60.975 - 61.440: 99.9676% ( 1) 00:44:53.566 62.836 - 63.302: 99.9784% ( 1) 00:44:53.566 66.095 - 66.560: 99.9892% ( 1) 00:44:53.566 131.258 - 132.189: 100.0000% ( 1) 00:44:53.566 00:44:53.566 00:44:53.566 real 0m1.339s 00:44:53.566 user 0m1.120s 00:44:53.566 sys 0m0.167s 00:44:53.566 09:56:00 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:53.566 ************************************ 00:44:53.566 END TEST nvme_overhead 00:44:53.566 ************************************ 00:44:53.566 09:56:00 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:44:53.566 09:56:00 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:44:53.566 09:56:00 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:44:53.566 09:56:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:53.566 09:56:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:53.566 ************************************ 00:44:53.566 START TEST nvme_arbitration 00:44:53.566 ************************************ 00:44:53.566 09:56:00 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:44:57.758 Initializing NVMe Controllers 00:44:57.758 Attached to 0000:00:10.0 00:44:57.758 Attached to 0000:00:11.0 00:44:57.758 Attached to 0000:00:13.0 00:44:57.758 Attached to 0000:00:12.0 00:44:57.758 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:44:57.758 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 
00:44:57.758 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:44:57.758 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:44:57.758 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:44:57.758 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:44:57.758 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:44:57.758 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:44:57.758 Initialization complete. Launching workers. 00:44:57.758 Starting thread on core 1 with urgent priority queue 00:44:57.758 Starting thread on core 2 with urgent priority queue 00:44:57.758 Starting thread on core 3 with urgent priority queue 00:44:57.758 Starting thread on core 0 with urgent priority queue 00:44:57.758 QEMU NVMe Ctrl (12340 ) core 0: 576.00 IO/s 173.61 secs/100000 ios 00:44:57.758 QEMU NVMe Ctrl (12342 ) core 0: 576.00 IO/s 173.61 secs/100000 ios 00:44:57.758 QEMU NVMe Ctrl (12341 ) core 1: 618.67 IO/s 161.64 secs/100000 ios 00:44:57.758 QEMU NVMe Ctrl (12342 ) core 1: 618.67 IO/s 161.64 secs/100000 ios 00:44:57.758 QEMU NVMe Ctrl (12343 ) core 2: 704.00 IO/s 142.05 secs/100000 ios 00:44:57.758 QEMU NVMe Ctrl (12342 ) core 3: 725.33 IO/s 137.87 secs/100000 ios 00:44:57.758 ======================================================== 00:44:57.758 00:44:57.758 00:44:57.758 real 0m3.549s 00:44:57.758 user 0m9.547s 00:44:57.758 sys 0m0.196s 00:44:57.758 09:56:03 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:57.758 ************************************ 00:44:57.758 END TEST nvme_arbitration 00:44:57.758 ************************************ 00:44:57.758 09:56:03 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:44:57.758 09:56:03 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:44:57.758 09:56:03 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:44:57.758 09:56:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:57.758 09:56:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:57.758 ************************************ 00:44:57.758 START TEST nvme_single_aen 00:44:57.758 ************************************ 00:44:57.758 09:56:03 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:44:57.758 Asynchronous Event Request test 00:44:57.758 Attached to 0000:00:10.0 00:44:57.758 Attached to 0000:00:11.0 00:44:57.758 Attached to 0000:00:13.0 00:44:57.758 Attached to 0000:00:12.0 00:44:57.758 Reset controller to setup AER completions for this process 00:44:57.758 Registering asynchronous event callbacks... 
00:44:57.758 Getting orig temperature thresholds of all controllers 00:44:57.758 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:44:57.758 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:44:57.758 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:44:57.758 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:44:57.758 Setting all controllers temperature threshold low to trigger AER 00:44:57.758 Waiting for all controllers temperature threshold to be set lower 00:44:57.758 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:44:57.758 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:44:57.758 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:44:57.758 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:44:57.758 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:44:57.758 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:44:57.758 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:44:57.758 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:44:57.758 Waiting for all controllers to trigger AER and reset threshold 00:44:57.758 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:44:57.758 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:44:57.758 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:44:57.758 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:44:57.758 Cleaning up... 00:44:57.758 00:44:57.758 real 0m0.305s 00:44:57.758 user 0m0.130s 00:44:57.758 sys 0m0.128s 00:44:57.758 ************************************ 00:44:57.758 END TEST nvme_single_aen 00:44:57.758 ************************************ 00:44:57.758 09:56:04 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:57.758 09:56:04 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:44:57.758 09:56:04 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:44:57.758 09:56:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:57.758 09:56:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:57.758 09:56:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:57.758 ************************************ 00:44:57.758 START TEST nvme_doorbell_aers 00:44:57.758 ************************************ 00:44:57.758 09:56:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:44:57.758 09:56:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:44:57.758 09:56:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:44:57.759 09:56:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:44:57.759 09:56:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:44:57.759 09:56:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:44:57.759 09:56:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:44:57.759 09:56:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:57.759 09:56:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:57.759 09:56:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
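The device list for nvme_doorbell_aers is built by piping scripts/gen_nvme.sh through jq, exactly as echoed above; the harness then prints the four PCIe addresses it will iterate over. To reproduce the enumeration by hand, assuming the same repo path:

/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
# on this machine: 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0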
00:44:57.759 09:56:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:44:57.759 09:56:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:44:57.759 09:56:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:44:57.759 09:56:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:44:57.759 [2024-12-09 09:56:04.729542] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:07.729 Executing: test_write_invalid_db 00:45:07.729 Waiting for AER completion... 00:45:07.729 Failure: test_write_invalid_db 00:45:07.729 00:45:07.729 Executing: test_invalid_db_write_overflow_sq 00:45:07.729 Waiting for AER completion... 00:45:07.729 Failure: test_invalid_db_write_overflow_sq 00:45:07.729 00:45:07.729 Executing: test_invalid_db_write_overflow_cq 00:45:07.729 Waiting for AER completion... 00:45:07.729 Failure: test_invalid_db_write_overflow_cq 00:45:07.729 00:45:07.729 09:56:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:45:07.729 09:56:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:45:07.988 [2024-12-09 09:56:14.802518] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:17.954 Executing: test_write_invalid_db 00:45:17.954 Waiting for AER completion... 00:45:17.954 Failure: test_write_invalid_db 00:45:17.954 00:45:17.954 Executing: test_invalid_db_write_overflow_sq 00:45:17.954 Waiting for AER completion... 00:45:17.954 Failure: test_invalid_db_write_overflow_sq 00:45:17.954 00:45:17.954 Executing: test_invalid_db_write_overflow_cq 00:45:17.954 Waiting for AER completion... 00:45:17.954 Failure: test_invalid_db_write_overflow_cq 00:45:17.954 00:45:17.954 09:56:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:45:17.954 09:56:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:45:17.954 [2024-12-09 09:56:24.848535] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:27.959 Executing: test_write_invalid_db 00:45:27.959 Waiting for AER completion... 00:45:27.959 Failure: test_write_invalid_db 00:45:27.959 00:45:27.959 Executing: test_invalid_db_write_overflow_sq 00:45:27.959 Waiting for AER completion... 00:45:27.959 Failure: test_invalid_db_write_overflow_sq 00:45:27.959 00:45:27.959 Executing: test_invalid_db_write_overflow_cq 00:45:27.959 Waiting for AER completion... 
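Each controller's doorbell run is wrapped in timeout --preserve-status 10, so a wedged test is killed after 10 seconds while --preserve-status propagates the test's own exit status rather than timeout's generic 124. A hand-run equivalent for a single controller, with paths and transport ID copied from this log:

sudo timeout --preserve-status 10 \
    /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
    -r 'trtype:PCIe traddr:0000:00:10.0'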
00:45:27.959 Failure: test_invalid_db_write_overflow_cq 00:45:27.959 00:45:27.959 09:56:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:45:27.959 09:56:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:45:27.959 [2024-12-09 09:56:34.880973] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:37.985 Executing: test_write_invalid_db 00:45:37.985 Waiting for AER completion... 00:45:37.985 Failure: test_write_invalid_db 00:45:37.985 00:45:37.985 Executing: test_invalid_db_write_overflow_sq 00:45:37.985 Waiting for AER completion... 00:45:37.985 Failure: test_invalid_db_write_overflow_sq 00:45:37.985 00:45:37.985 Executing: test_invalid_db_write_overflow_cq 00:45:37.985 Waiting for AER completion... 00:45:37.985 Failure: test_invalid_db_write_overflow_cq 00:45:37.985 00:45:37.985 ************************************ 00:45:37.985 END TEST nvme_doorbell_aers 00:45:37.985 ************************************ 00:45:37.985 00:45:37.985 real 0m40.267s 00:45:37.985 user 0m34.120s 00:45:37.985 sys 0m5.756s 00:45:37.985 09:56:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:37.985 09:56:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:45:37.985 09:56:44 nvme -- nvme/nvme.sh@97 -- # uname 00:45:37.985 09:56:44 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:45:37.985 09:56:44 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:45:37.985 09:56:44 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:45:37.985 09:56:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:37.985 09:56:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:45:37.985 ************************************ 00:45:37.985 START TEST nvme_multi_aen 00:45:37.985 ************************************ 00:45:37.985 09:56:44 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:45:37.985 [2024-12-09 09:56:44.958411] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:37.985 [2024-12-09 09:56:44.958507] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:37.985 [2024-12-09 09:56:44.958529] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:37.985 [2024-12-09 09:56:44.960437] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:37.985 [2024-12-09 09:56:44.960488] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:37.985 [2024-12-09 09:56:44.960505] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:37.985 [2024-12-09 09:56:44.961921] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. 
Dropping the request. 00:45:37.985 [2024-12-09 09:56:44.961968] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:37.985 [2024-12-09 09:56:44.961998] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:37.985 [2024-12-09 09:56:44.963810] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:37.985 [2024-12-09 09:56:44.964069] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:37.985 [2024-12-09 09:56:44.964230] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64982) is not found. Dropping the request. 00:45:37.985 Child process pid: 65496 00:45:38.560 [Child] Asynchronous Event Request test 00:45:38.560 [Child] Attached to 0000:00:10.0 00:45:38.560 [Child] Attached to 0000:00:11.0 00:45:38.560 [Child] Attached to 0000:00:13.0 00:45:38.560 [Child] Attached to 0000:00:12.0 00:45:38.560 [Child] Registering asynchronous event callbacks... 00:45:38.560 [Child] Getting orig temperature thresholds of all controllers 00:45:38.560 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:45:38.560 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:45:38.560 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:45:38.560 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:45:38.560 [Child] Waiting for all controllers to trigger AER and reset threshold 00:45:38.560 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:45:38.560 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:45:38.560 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:45:38.560 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:45:38.560 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:45:38.560 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:45:38.560 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:45:38.560 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:45:38.560 [Child] Cleaning up... 00:45:38.560 Asynchronous Event Request test 00:45:38.560 Attached to 0000:00:10.0 00:45:38.560 Attached to 0000:00:11.0 00:45:38.560 Attached to 0000:00:13.0 00:45:38.560 Attached to 0000:00:12.0 00:45:38.560 Reset controller to setup AER completions for this process 00:45:38.560 Registering asynchronous event callbacks... 
00:45:38.560 Getting orig temperature thresholds of all controllers 00:45:38.560 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:45:38.560 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:45:38.560 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:45:38.560 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:45:38.560 Setting all controllers temperature threshold low to trigger AER 00:45:38.560 Waiting for all controllers temperature threshold to be set lower 00:45:38.560 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:45:38.560 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:45:38.560 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:45:38.560 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:45:38.560 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:45:38.560 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:45:38.560 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:45:38.560 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:45:38.560 Waiting for all controllers to trigger AER and reset threshold 00:45:38.560 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:45:38.560 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:45:38.560 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:45:38.560 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:45:38.560 Cleaning up... 00:45:38.560 00:45:38.560 real 0m0.690s 00:45:38.560 user 0m0.277s 00:45:38.560 sys 0m0.308s 00:45:38.560 09:56:45 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:38.560 ************************************ 00:45:38.560 END TEST nvme_multi_aen 00:45:38.560 ************************************ 00:45:38.560 09:56:45 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:45:38.560 09:56:45 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:45:38.560 09:56:45 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:45:38.560 09:56:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:38.560 09:56:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:45:38.560 ************************************ 00:45:38.560 START TEST nvme_startup 00:45:38.560 ************************************ 00:45:38.560 09:56:45 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:45:38.819 Initializing NVMe Controllers 00:45:38.819 Attached to 0000:00:10.0 00:45:38.819 Attached to 0000:00:11.0 00:45:38.819 Attached to 0000:00:13.0 00:45:38.819 Attached to 0000:00:12.0 00:45:38.819 Initialization complete. 00:45:38.819 Time used:238678.281 (us). 
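nvme_startup above brings up all four controllers and reports the elapsed time; the -t 1000000 argument appears to be the allowed startup budget in microseconds, which the measured 238678 us comfortably beats. To rerun in isolation, assuming the same checkout:

sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000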
00:45:38.819 00:45:38.819 real 0m0.336s 00:45:38.819 user 0m0.127s 00:45:38.819 sys 0m0.161s 00:45:38.819 09:56:45 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:38.819 09:56:45 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:45:38.819 ************************************ 00:45:38.819 END TEST nvme_startup 00:45:38.819 ************************************ 00:45:38.819 09:56:45 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:45:38.819 09:56:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:38.819 09:56:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:38.819 09:56:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:45:38.819 ************************************ 00:45:38.819 START TEST nvme_multi_secondary 00:45:38.819 ************************************ 00:45:38.819 09:56:45 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:45:38.819 09:56:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65555 00:45:38.819 09:56:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:45:38.819 09:56:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65556 00:45:38.819 09:56:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:45:38.819 09:56:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:45:43.009 Initializing NVMe Controllers 00:45:43.009 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:43.009 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:45:43.009 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:45:43.009 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:45:43.009 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:45:43.009 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:45:43.009 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:45:43.009 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:45:43.009 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:45:43.009 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:45:43.009 Initialization complete. Launching workers. 
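nvme_multi_secondary exercises SPDK's multi-process mode: several spdk_nvme_perf instances join one shared-memory group via -i 0, each pinned to its own core mask (-c 0x1, 0x2, 0x4), so primary and secondary processes drive the same controllers concurrently. A minimal two-process sketch of the same pattern, using the flags from the invocations above and assuming the perf binary at the path shown:

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # longer-running instance on core 0
sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # second instance on core 1
wait   # block until both perf processes report their results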
00:45:43.009 ======================================================== 00:45:43.009 Latency(us) 00:45:43.009 Device Information : IOPS MiB/s Average min max 00:45:43.009 PCIE (0000:00:10.0) NSID 1 from core 2: 2222.76 8.68 7195.04 1961.26 15322.99 00:45:43.009 PCIE (0000:00:11.0) NSID 1 from core 2: 2222.76 8.68 7197.02 2171.59 15471.06 00:45:43.009 PCIE (0000:00:13.0) NSID 1 from core 2: 2222.76 8.68 7197.94 1887.75 15840.59 00:45:43.009 PCIE (0000:00:12.0) NSID 1 from core 2: 2222.76 8.68 7198.26 2071.78 13740.26 00:45:43.009 PCIE (0000:00:12.0) NSID 2 from core 2: 2222.76 8.68 7196.49 2079.09 16725.88 00:45:43.009 PCIE (0000:00:12.0) NSID 3 from core 2: 2222.76 8.68 7198.21 1893.67 16610.72 00:45:43.009 ======================================================== 00:45:43.009 Total : 13336.53 52.10 7197.16 1887.75 16725.88 00:45:43.009 00:45:43.009 Initializing NVMe Controllers 00:45:43.009 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:43.009 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:45:43.009 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:45:43.009 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:45:43.009 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:45:43.009 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:45:43.009 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:45:43.009 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:45:43.009 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:45:43.009 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:45:43.009 Initialization complete. Launching workers. 00:45:43.009 ======================================================== 00:45:43.009 Latency(us) 00:45:43.009 Device Information : IOPS MiB/s Average min max 00:45:43.009 PCIE (0000:00:10.0) NSID 1 from core 1: 4984.50 19.47 3207.86 1510.73 11346.18 00:45:43.009 PCIE (0000:00:11.0) NSID 1 from core 1: 4984.50 19.47 3209.55 1453.60 11644.24 00:45:43.009 PCIE (0000:00:13.0) NSID 1 from core 1: 4984.50 19.47 3209.54 1336.05 11926.90 00:45:43.009 PCIE (0000:00:12.0) NSID 1 from core 1: 4984.50 19.47 3209.63 1415.89 12162.37 00:45:43.009 PCIE (0000:00:12.0) NSID 2 from core 1: 4984.50 19.47 3209.59 1512.16 11590.18 00:45:43.009 PCIE (0000:00:12.0) NSID 3 from core 1: 4984.50 19.47 3209.51 1521.40 11638.81 00:45:43.009 ======================================================== 00:45:43.009 Total : 29906.99 116.82 3209.28 1336.05 12162.37 00:45:43.009 00:45:43.009 09:56:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65555 00:45:44.390 Initializing NVMe Controllers 00:45:44.390 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:44.390 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:45:44.390 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:45:44.390 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:45:44.390 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:45:44.390 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:45:44.390 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:45:44.390 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:45:44.390 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:45:44.390 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:45:44.390 Initialization complete. Launching workers. 
00:45:44.390 ======================================================== 00:45:44.390 Latency(us) 00:45:44.390 Device Information : IOPS MiB/s Average min max 00:45:44.390 PCIE (0000:00:10.0) NSID 1 from core 0: 7599.72 29.69 2103.57 966.04 7078.71 00:45:44.390 PCIE (0000:00:11.0) NSID 1 from core 0: 7599.72 29.69 2104.83 982.66 6995.09 00:45:44.390 PCIE (0000:00:13.0) NSID 1 from core 0: 7599.72 29.69 2104.79 988.27 6893.62 00:45:44.390 PCIE (0000:00:12.0) NSID 1 from core 0: 7599.72 29.69 2104.75 950.06 6695.13 00:45:44.390 PCIE (0000:00:12.0) NSID 2 from core 0: 7599.72 29.69 2104.72 923.41 6771.84 00:45:44.390 PCIE (0000:00:12.0) NSID 3 from core 0: 7599.72 29.69 2104.67 867.88 6455.58 00:45:44.390 ======================================================== 00:45:44.390 Total : 45598.31 178.12 2104.56 867.88 7078.71 00:45:44.390 00:45:44.390 09:56:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65556 00:45:44.390 09:56:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65625 00:45:44.390 09:56:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:45:44.390 09:56:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65626 00:45:44.390 09:56:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:45:44.390 09:56:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:45:47.674 Initializing NVMe Controllers 00:45:47.674 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:47.674 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:45:47.674 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:45:47.674 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:45:47.674 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:45:47.674 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:45:47.674 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:45:47.674 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:45:47.674 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:45:47.674 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:45:47.674 Initialization complete. Launching workers. 
00:45:47.674 ======================================================== 00:45:47.674 Latency(us) 00:45:47.674 Device Information : IOPS MiB/s Average min max 00:45:47.675 PCIE (0000:00:10.0) NSID 1 from core 1: 5422.22 21.18 2948.96 960.01 11633.09 00:45:47.675 PCIE (0000:00:11.0) NSID 1 from core 1: 5422.22 21.18 2950.28 985.34 11515.85 00:45:47.675 PCIE (0000:00:13.0) NSID 1 from core 1: 5422.22 21.18 2950.28 969.66 11993.97 00:45:47.675 PCIE (0000:00:12.0) NSID 1 from core 1: 5427.55 21.20 2947.37 973.78 11193.21 00:45:47.675 PCIE (0000:00:12.0) NSID 2 from core 1: 5427.55 21.20 2947.28 978.68 10722.41 00:45:47.675 PCIE (0000:00:12.0) NSID 3 from core 1: 5427.55 21.20 2947.27 982.26 10991.76 00:45:47.675 ======================================================== 00:45:47.675 Total : 32549.32 127.15 2948.57 960.01 11993.97 00:45:47.675 00:45:47.675 Initializing NVMe Controllers 00:45:47.675 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:47.675 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:45:47.675 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:45:47.675 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:45:47.675 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:45:47.675 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:45:47.675 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:45:47.675 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:45:47.675 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:45:47.675 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:45:47.675 Initialization complete. Launching workers. 00:45:47.675 ======================================================== 00:45:47.675 Latency(us) 00:45:47.675 Device Information : IOPS MiB/s Average min max 00:45:47.675 PCIE (0000:00:10.0) NSID 1 from core 0: 5145.98 20.10 3107.08 966.11 6719.86 00:45:47.675 PCIE (0000:00:11.0) NSID 1 from core 0: 5145.98 20.10 3108.59 988.59 6327.02 00:45:47.675 PCIE (0000:00:13.0) NSID 1 from core 0: 5145.98 20.10 3108.41 1008.44 6924.16 00:45:47.675 PCIE (0000:00:12.0) NSID 1 from core 0: 5145.98 20.10 3108.20 1003.58 6882.17 00:45:47.675 PCIE (0000:00:12.0) NSID 2 from core 0: 5145.98 20.10 3107.97 1004.31 7289.35 00:45:47.675 PCIE (0000:00:12.0) NSID 3 from core 0: 5145.98 20.10 3107.73 1011.91 7238.87 00:45:47.675 ======================================================== 00:45:47.675 Total : 30875.90 120.61 3108.00 966.11 7289.35 00:45:47.675 00:45:49.623 Initializing NVMe Controllers 00:45:49.623 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:49.623 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:45:49.623 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:45:49.623 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:45:49.623 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:45:49.623 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:45:49.623 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:45:49.623 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:45:49.623 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:45:49.623 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:45:49.623 Initialization complete. Launching workers. 
00:45:49.623 ========================================================
00:45:49.623 Latency(us)
00:45:49.623 Device Information : IOPS MiB/s Average min max
00:45:49.623 PCIE (0000:00:10.0) NSID 1 from core 2: 3506.73 13.70 4560.11 1056.18 13497.48
00:45:49.623 PCIE (0000:00:11.0) NSID 1 from core 2: 3506.73 13.70 4561.67 1033.67 13941.86
00:45:49.623 PCIE (0000:00:13.0) NSID 1 from core 2: 3506.73 13.70 4562.04 1090.61 13960.55
00:45:49.623 PCIE (0000:00:12.0) NSID 1 from core 2: 3506.73 13.70 4558.50 1101.23 13533.24
00:45:49.623 PCIE (0000:00:12.0) NSID 2 from core 2: 3506.73 13.70 4557.93 1034.66 13593.55
00:45:49.623 PCIE (0000:00:12.0) NSID 3 from core 2: 3506.73 13.70 4558.06 964.60 13843.77
00:45:49.623 ========================================================
00:45:49.623 Total : 21040.41 82.19 4559.72 964.60 13960.55
00:45:49.623
00:45:49.883 09:56:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65625
00:45:49.883 09:56:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65626
************************************
00:45:49.883 END TEST nvme_multi_secondary
00:45:49.883 ************************************
00:45:49.883
00:45:49.883 real 0m10.876s
00:45:49.883 user 0m18.599s
00:45:49.883 sys 0m1.012s
00:45:49.883 09:56:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:45:49.883 09:56:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:45:49.883 09:56:56 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:45:49.883 09:56:56 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:45:49.883 09:56:56 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64557 ]]
00:45:49.883 09:56:56 nvme -- common/autotest_common.sh@1094 -- # kill 64557
00:45:49.883 09:56:56 nvme -- common/autotest_common.sh@1095 -- # wait 64557
00:45:49.883 [2024-12-09 09:56:56.709970] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.710244] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.710323] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.710360] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.713219] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.713465] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.713503] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.713535] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.716455] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
00:45:49.883 [2024-12-09 09:56:56.716680] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.716729] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.716761] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.719628] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.719845] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.719885] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
[2024-12-09 09:56:56.719914] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65492) is not found. Dropping the request.
00:45:50.142 09:56:56 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:45:50.142 09:56:56 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:45:50.142 09:56:56 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:45:50.142 09:56:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:45:50.142 09:56:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:45:50.142 09:56:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:45:50.142 ************************************
00:45:50.142 START TEST bdev_nvme_reset_stuck_adm_cmd
00:45:50.142 ************************************
00:45:50.142 09:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:45:50.142 * Looking for test storage...
00:45:50.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:50.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:50.142 --rc genhtml_branch_coverage=1 00:45:50.142 --rc genhtml_function_coverage=1 00:45:50.142 --rc genhtml_legend=1 00:45:50.142 --rc geninfo_all_blocks=1 00:45:50.142 --rc geninfo_unexecuted_blocks=1 00:45:50.142 00:45:50.142 ' 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:45:50.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:50.142 --rc genhtml_branch_coverage=1 00:45:50.142 --rc genhtml_function_coverage=1 00:45:50.142 --rc genhtml_legend=1 00:45:50.142 --rc geninfo_all_blocks=1 00:45:50.142 --rc geninfo_unexecuted_blocks=1 00:45:50.142 00:45:50.142 ' 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:50.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:50.142 --rc genhtml_branch_coverage=1 00:45:50.142 --rc genhtml_function_coverage=1 00:45:50.142 --rc genhtml_legend=1 00:45:50.142 --rc geninfo_all_blocks=1 00:45:50.142 --rc geninfo_unexecuted_blocks=1 00:45:50.142 00:45:50.142 ' 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:50.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:50.142 --rc genhtml_branch_coverage=1 00:45:50.142 --rc genhtml_function_coverage=1 00:45:50.142 --rc genhtml_legend=1 00:45:50.142 --rc geninfo_all_blocks=1 00:45:50.142 --rc geninfo_unexecuted_blocks=1 00:45:50.142 00:45:50.142 ' 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:45:50.142 
09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65792 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65792 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65792 ']' 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:50.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
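The get_first_nvme_bdf step traced above reduces to one pipeline: gen_nvme.sh emits a bdev_nvme JSON config for every NVMe controller it can see, and jq pulls out each controller's PCI address (traddr). A condensed sketch of that discovery, using the same commands as the trace (the error message is illustrative):

  rootdir=/home/vagrant/spdk_repo/spdk
  # One traddr per attached controller, e.g. 0000:00:10.0 ... 0000:00:13.0
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  bdf=${bdfs[0]}   # the reset test only needs the first controller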
00:45:50.142 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:50.401 09:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:50.401 [2024-12-09 09:56:57.287743] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:45:50.401 [2024-12-09 09:56:57.288130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65792 ] 00:45:50.660 [2024-12-09 09:56:57.488453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:50.660 [2024-12-09 09:56:57.653951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:50.660 [2024-12-09 09:56:57.654066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:50.660 [2024-12-09 09:56:57.654223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:50.660 [2024-12-09 09:56:57.654234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:51.595 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:51.595 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:45:51.595 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:45:51.595 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.595 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:51.595 nvme0n1 00:45:51.595 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.595 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:45:51.595 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_U8ANh.txt 00:45:51.595 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:45:51.595 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.595 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:51.854 true 00:45:51.854 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.854 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:45:51.854 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733738218 00:45:51.854 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65815 00:45:51.854 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:45:51.854 09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:45:51.854 
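Stripped of the xtrace noise, the setup above arms a single admin-queue fault and then issues the command that trips it: opcode 10 (Get Features, here fid 7 / Number of Queues) is held for up to 15 s and completed with SCT=0/SC=1 instead of reaching the device. Roughly the sequence, condensed from the trace (the tmp_file redirect stands in for the mktemp'd /tmp/err_inj_*.txt file):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # Hold the next admin Get Features for up to 15 s, then complete it
  # with SCT=0/SC=1 without ever submitting it to the drive.
  "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # 64-byte Get Features admin command (cdw10=7) that will get stuck.
  cmd=CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
  "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd" > "$tmp_file" &
  get_feat_pid=$!
  sleep 2
  "$rpc" bdev_nvme_reset_controller nvme0   # the reset must abort the stuck command
  wait "$get_feat_pid"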
09:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:53.756 [2024-12-09 09:57:00.661287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:45:53.756 [2024-12-09 09:57:00.661845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:45:53.756 [2024-12-09 09:57:00.662013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:45:53.756 [2024-12-09 09:57:00.662045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:53.756 [2024-12-09 09:57:00.664288] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:45:53.756 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65815 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65815 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65815 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_U8ANh.txt 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_U8ANh.txt 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65792 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65792 ']' 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65792 00:45:53.756 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:45:53.757 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:53.757 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65792 00:45:54.015 killing process with pid 65792 00:45:54.015 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:54.015 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:54.015 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65792' 00:45:54.015 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65792 00:45:54.015 09:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65792 00:45:56.570 09:57:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:45:56.570 09:57:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:45:56.570 00:45:56.570 real 0m6.130s 00:45:56.570 user 0m21.645s 00:45:56.570 sys 0m0.778s 00:45:56.570 09:57:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
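The base64/hexdump dance above is base64_decode_bits extracting the Status Code and Status Code Type from the completion saved by bdev_nvme_send_cmd. A condensed re-implementation, assuming (as the trace implies, since it computes status=2 for this payload) that the status word sits in the last two bytes of the 16-byte CQE, with bit 0 the phase bit, SC at bits 1-8 and SCT from bit 9:

  base64_decode_bits() {
      local b64=$1 shift_by=$2 mask=$3
      # One hex byte per array element.
      local bytes=($(base64 -d <(printf '%s' "$b64") | hexdump -ve '/1 "0x%02x\n"'))
      local n=${#bytes[@]}
      # Little-endian status word from the last two bytes of the CQE.
      local status=$(( (bytes[n-1] << 8) | bytes[n-2] ))
      printf '0x%x\n' $(( (status >> shift_by) & mask ))
  }
  cpl=AAAAAAAAAAAAAAAAAAACAA==              # .cpl field saved by the test
  sc=$(base64_decode_bits "$cpl" 1 255)     # -> 0x1, matches --sc 1
  sct=$(base64_decode_bits "$cpl" 9 3)      # -> 0x0, matches --sct 0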
xtrace_disable 00:45:56.570 ************************************ 00:45:56.570 END TEST bdev_nvme_reset_stuck_adm_cmd 00:45:56.570 ************************************ 00:45:56.570 09:57:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:56.570 09:57:03 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:45:56.570 09:57:03 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:45:56.570 09:57:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:56.570 09:57:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:56.570 09:57:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:45:56.570 ************************************ 00:45:56.570 START TEST nvme_fio 00:45:56.570 ************************************ 00:45:56.570 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:45:56.570 09:57:03 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:45:56.570 09:57:03 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:45:56.570 09:57:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:45:56.570 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:45:56.570 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:45:56.570 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:56.570 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:45:56.570 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:45:56.570 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:45:56.570 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:45:56.570 09:57:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:45:56.570 09:57:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:45:56.570 09:57:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:45:56.570 09:57:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:45:56.570 09:57:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:45:56.570 09:57:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:45:56.570 09:57:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:45:56.829 09:57:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:45:56.829 09:57:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:56.829 09:57:03 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:45:56.829 09:57:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:45:57.087 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:45:57.087 fio-3.35 00:45:57.087 Starting 1 thread 00:46:00.371 00:46:00.371 test: (groupid=0, jobs=1): err= 0: pid=65966: Mon Dec 9 09:57:06 2024 00:46:00.371 read: IOPS=13.1k, BW=51.1MiB/s (53.5MB/s)(102MiB/2001msec) 00:46:00.371 slat (usec): min=4, max=118, avg= 8.13, stdev= 3.29 00:46:00.371 clat (usec): min=264, max=10218, avg=4872.79, stdev=1129.29 00:46:00.371 lat (usec): min=270, max=10337, avg=4880.92, stdev=1131.04 00:46:00.371 clat percentiles (usec): 00:46:00.371 | 1.00th=[ 3392], 5.00th=[ 3621], 10.00th=[ 3752], 20.00th=[ 3982], 00:46:00.371 | 30.00th=[ 4293], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4686], 00:46:00.371 | 70.00th=[ 4883], 80.00th=[ 5473], 90.00th=[ 6783], 95.00th=[ 7177], 00:46:00.371 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[ 8586], 99.95th=[ 8848], 00:46:00.371 | 99.99th=[ 9765] 00:46:00.371 bw ( KiB/s): min=42368, max=55992, per=96.37%, avg=50381.33, stdev=7122.71, samples=3 00:46:00.371 iops : min=10592, max=13998, avg=12595.33, stdev=1780.68, samples=3 00:46:00.371 write: IOPS=13.1k, BW=51.0MiB/s (53.5MB/s)(102MiB/2001msec); 0 zone resets 00:46:00.371 slat (nsec): min=5070, max=66686, avg=8241.01, stdev=3243.90 00:46:00.371 clat (usec): min=313, max=9636, avg=4881.63, stdev=1136.08 00:46:00.371 lat (usec): min=319, max=9648, avg=4889.87, stdev=1137.81 00:46:00.371 clat percentiles (usec): 00:46:00.371 | 1.00th=[ 3359], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3982], 00:46:00.371 | 30.00th=[ 4359], 40.00th=[ 4490], 50.00th=[ 4621], 60.00th=[ 4686], 00:46:00.371 | 70.00th=[ 4948], 80.00th=[ 5473], 90.00th=[ 6849], 95.00th=[ 7242], 00:46:00.371 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[ 8586], 99.95th=[ 8848], 00:46:00.371 | 99.99th=[ 9372] 00:46:00.371 bw ( KiB/s): min=42712, max=55328, per=96.43%, avg=50389.33, stdev=6739.15, samples=3 00:46:00.371 iops : min=10678, max=13832, avg=12597.33, stdev=1684.79, samples=3 00:46:00.371 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:46:00.371 lat (msec) : 2=0.06%, 4=20.44%, 10=79.44%, 20=0.01% 00:46:00.371 cpu : usr=98.65%, sys=0.25%, ctx=4, majf=0, 
minf=608 00:46:00.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:46:00.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:00.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:00.371 issued rwts: total=26153,26141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:00.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:00.371 00:46:00.371 Run status group 0 (all jobs): 00:46:00.371 READ: bw=51.1MiB/s (53.5MB/s), 51.1MiB/s-51.1MiB/s (53.5MB/s-53.5MB/s), io=102MiB (107MB), run=2001-2001msec 00:46:00.371 WRITE: bw=51.0MiB/s (53.5MB/s), 51.0MiB/s-51.0MiB/s (53.5MB/s-53.5MB/s), io=102MiB (107MB), run=2001-2001msec 00:46:00.371 ----------------------------------------------------- 00:46:00.371 Suppressions used: 00:46:00.371 count bytes template 00:46:00.371 1 32 /usr/src/fio/parse.c 00:46:00.371 1 8 libtcmalloc_minimal.so 00:46:00.371 ----------------------------------------------------- 00:46:00.371 00:46:00.371 09:57:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:46:00.371 09:57:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:46:00.371 09:57:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:46:00.371 09:57:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:46:00.629 09:57:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:46:00.629 09:57:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:46:00.887 09:57:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:46:00.887 09:57:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:46:00.887 09:57:07 nvme.nvme_fio -- 
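Each per-controller pass of nvme_fio follows the recipe visible in the trace: spdk_nvme_identify first confirms the namespace exists and checks for 'Extended Data LBA' metadata (absent here, so a plain 4 KiB block size is used), then fio is launched with the SPDK plugin preloaded. Two quirks are worth calling out: the ioengine is injected via LD_PRELOAD (the job file already sets ioengine=spdk), and the PCI address inside --filename uses dots instead of colons because fio would otherwise misparse the name; libasan is preloaded ahead of the plugin only because this is an ASAN build. Condensed from the trace:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  cfg=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
  LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" /usr/src/fio/fio "$cfg" \
      '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096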
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:46:00.887 09:57:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:46:01.144 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:46:01.144 fio-3.35 00:46:01.144 Starting 1 thread 00:46:04.448 00:46:04.448 test: (groupid=0, jobs=1): err= 0: pid=66026: Mon Dec 9 09:57:11 2024 00:46:04.448 read: IOPS=14.4k, BW=56.4MiB/s (59.1MB/s)(113MiB/2001msec) 00:46:04.448 slat (usec): min=4, max=149, avg= 7.43, stdev= 3.11 00:46:04.448 clat (usec): min=350, max=11252, avg=4422.31, stdev=1198.52 00:46:04.448 lat (usec): min=359, max=11338, avg=4429.74, stdev=1200.36 00:46:04.448 clat percentiles (usec): 00:46:04.448 | 1.00th=[ 3294], 5.00th=[ 3490], 10.00th=[ 3589], 20.00th=[ 3687], 00:46:04.448 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3949], 00:46:04.448 | 70.00th=[ 4293], 80.00th=[ 5080], 90.00th=[ 6390], 95.00th=[ 7242], 00:46:04.448 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[ 8979], 99.95th=[ 9372], 00:46:04.448 | 99.99th=[11076] 00:46:04.448 bw ( KiB/s): min=49160, max=61760, per=97.04%, avg=55997.33, stdev=6368.37, samples=3 00:46:04.448 iops : min=12290, max=15440, avg=13999.33, stdev=1592.09, samples=3 00:46:04.448 write: IOPS=14.4k, BW=56.4MiB/s (59.1MB/s)(113MiB/2001msec); 0 zone resets 00:46:04.448 slat (nsec): min=5032, max=69771, avg=7576.61, stdev=2972.99 00:46:04.448 clat (usec): min=413, max=11062, avg=4415.22, stdev=1193.20 00:46:04.448 lat (usec): min=421, max=11070, avg=4422.80, stdev=1195.02 00:46:04.448 clat percentiles (usec): 00:46:04.448 | 1.00th=[ 3294], 5.00th=[ 3490], 10.00th=[ 3589], 20.00th=[ 3687], 00:46:04.448 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3949], 00:46:04.448 | 70.00th=[ 4228], 80.00th=[ 5080], 90.00th=[ 6390], 95.00th=[ 7177], 00:46:04.448 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[ 8979], 99.95th=[ 9503], 00:46:04.448 | 99.99th=[10814] 00:46:04.448 bw ( KiB/s): min=48960, max=61616, per=96.95%, avg=55994.67, stdev=6445.29, samples=3 00:46:04.448 iops : min=12240, max=15404, avg=13998.67, stdev=1611.32, samples=3 00:46:04.448 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:46:04.448 lat (msec) : 2=0.04%, 4=62.79%, 10=37.10%, 20=0.04% 00:46:04.448 cpu : usr=98.50%, sys=0.40%, ctx=5, majf=0, minf=608 00:46:04.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:46:04.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:04.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:04.448 issued rwts: total=28867,28893,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:04.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:04.448 00:46:04.448 Run status group 0 (all jobs): 00:46:04.448 READ: bw=56.4MiB/s (59.1MB/s), 56.4MiB/s-56.4MiB/s (59.1MB/s-59.1MB/s), io=113MiB (118MB), run=2001-2001msec 00:46:04.448 WRITE: bw=56.4MiB/s (59.1MB/s), 56.4MiB/s-56.4MiB/s (59.1MB/s-59.1MB/s), io=113MiB (118MB), run=2001-2001msec 00:46:04.448 ----------------------------------------------------- 00:46:04.448 Suppressions used: 00:46:04.448 count bytes template 00:46:04.448 1 32 /usr/src/fio/parse.c 00:46:04.448 1 8 libtcmalloc_minimal.so 00:46:04.448 ----------------------------------------------------- 00:46:04.448 00:46:04.448 
09:57:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:46:04.448 09:57:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:46:04.448 09:57:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:46:04.448 09:57:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:46:04.707 09:57:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:46:04.707 09:57:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:46:04.965 09:57:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:46:04.965 09:57:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:46:04.965 09:57:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:46:05.223 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:46:05.223 fio-3.35 00:46:05.223 Starting 1 thread 00:46:08.503 00:46:08.503 test: (groupid=0, jobs=1): err= 0: pid=66087: Mon Dec 9 09:57:15 2024 00:46:08.503 read: IOPS=16.6k, BW=64.8MiB/s (67.9MB/s)(130MiB/2001msec) 00:46:08.503 slat (nsec): min=4703, max=67840, avg=6447.76, stdev=2223.85 00:46:08.503 clat (usec): min=293, max=8812, avg=3840.51, stdev=644.72 00:46:08.503 lat (usec): min=299, max=8871, avg=3846.96, stdev=645.48 00:46:08.503 clat percentiles (usec): 00:46:08.503 | 1.00th=[ 2442], 5.00th=[ 2868], 10.00th=[ 3097], 20.00th=[ 3458], 00:46:08.503 | 30.00th=[ 3621], 
40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3851], 00:46:08.503 | 70.00th=[ 3982], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 4752], 00:46:08.503 | 99.00th=[ 5735], 99.50th=[ 6718], 99.90th=[ 7701], 99.95th=[ 8094], 00:46:08.503 | 99.99th=[ 8586] 00:46:08.503 bw ( KiB/s): min=66016, max=70056, per=100.00%, avg=68434.67, stdev=2134.76, samples=3 00:46:08.503 iops : min=16504, max=17514, avg=17108.67, stdev=533.69, samples=3 00:46:08.503 write: IOPS=16.6k, BW=64.9MiB/s (68.0MB/s)(130MiB/2001msec); 0 zone resets 00:46:08.503 slat (nsec): min=4782, max=45844, avg=6584.03, stdev=2046.70 00:46:08.503 clat (usec): min=320, max=8682, avg=3843.68, stdev=642.97 00:46:08.503 lat (usec): min=326, max=8692, avg=3850.27, stdev=643.67 00:46:08.503 clat percentiles (usec): 00:46:08.503 | 1.00th=[ 2442], 5.00th=[ 2868], 10.00th=[ 3097], 20.00th=[ 3490], 00:46:08.503 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3851], 00:46:08.503 | 70.00th=[ 4015], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 4752], 00:46:08.503 | 99.00th=[ 5735], 99.50th=[ 6652], 99.90th=[ 7832], 99.95th=[ 8094], 00:46:08.503 | 99.99th=[ 8586] 00:46:08.503 bw ( KiB/s): min=66392, max=69616, per=100.00%, avg=68261.33, stdev=1672.48, samples=3 00:46:08.503 iops : min=16598, max=17404, avg=17065.33, stdev=418.12, samples=3 00:46:08.503 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:46:08.503 lat (msec) : 2=0.22%, 4=69.90%, 10=29.85% 00:46:08.503 cpu : usr=98.95%, sys=0.10%, ctx=6, majf=0, minf=607 00:46:08.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:46:08.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:08.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:08.503 issued rwts: total=33179,33237,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:08.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:08.503 00:46:08.503 Run status group 0 (all jobs): 00:46:08.503 READ: bw=64.8MiB/s (67.9MB/s), 64.8MiB/s-64.8MiB/s (67.9MB/s-67.9MB/s), io=130MiB (136MB), run=2001-2001msec 00:46:08.503 WRITE: bw=64.9MiB/s (68.0MB/s), 64.9MiB/s-64.9MiB/s (68.0MB/s-68.0MB/s), io=130MiB (136MB), run=2001-2001msec 00:46:08.761 ----------------------------------------------------- 00:46:08.761 Suppressions used: 00:46:08.761 count bytes template 00:46:08.761 1 32 /usr/src/fio/parse.c 00:46:08.761 1 8 libtcmalloc_minimal.so 00:46:08.761 ----------------------------------------------------- 00:46:08.761 00:46:08.761 09:57:15 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:46:08.761 09:57:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:46:08.761 09:57:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:46:08.761 09:57:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:46:09.018 09:57:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:46:09.018 09:57:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:46:09.276 09:57:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:46:09.276 09:57:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:46:09.276 09:57:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:46:09.533 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:46:09.533 fio-3.35 00:46:09.533 Starting 1 thread 00:46:13.718 00:46:13.718 test: (groupid=0, jobs=1): err= 0: pid=66153: Mon Dec 9 09:57:20 2024 00:46:13.718 read: IOPS=16.3k, BW=63.7MiB/s (66.7MB/s)(127MiB/2001msec) 00:46:13.718 slat (nsec): min=4663, max=72710, avg=6471.42, stdev=2246.68 00:46:13.718 clat (usec): min=397, max=8836, avg=3904.30, stdev=690.90 00:46:13.718 lat (usec): min=404, max=8849, avg=3910.77, stdev=691.78 00:46:13.718 clat percentiles (usec): 00:46:13.718 | 1.00th=[ 2835], 5.00th=[ 3294], 10.00th=[ 3392], 20.00th=[ 3458], 00:46:13.718 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3752], 00:46:13.718 | 70.00th=[ 4146], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5080], 00:46:13.718 | 99.00th=[ 6652], 99.50th=[ 7111], 99.90th=[ 7898], 99.95th=[ 8225], 00:46:13.718 | 99.99th=[ 8717] 00:46:13.718 bw ( KiB/s): min=65032, max=68287, per=100.00%, avg=66354.33, stdev=1711.18, samples=3 00:46:13.718 iops : min=16258, max=17071, avg=16588.33, stdev=427.37, samples=3 00:46:13.718 write: IOPS=16.3k, BW=63.8MiB/s (66.9MB/s)(128MiB/2001msec); 0 zone resets 00:46:13.718 slat (nsec): min=4758, max=81826, avg=6609.52, stdev=2284.35 00:46:13.718 clat (usec): min=233, max=8942, avg=3913.66, stdev=692.09 00:46:13.718 lat (usec): min=240, max=8949, avg=3920.27, stdev=692.95 00:46:13.718 clat percentiles (usec): 00:46:13.718 | 1.00th=[ 2900], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3458], 00:46:13.718 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3752], 00:46:13.718 | 70.00th=[ 4146], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5145], 00:46:13.718 
| 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7832], 99.95th=[ 8225], 00:46:13.718 | 99.99th=[ 8586] 00:46:13.718 bw ( KiB/s): min=64920, max=68463, per=100.00%, avg=66263.67, stdev=1920.24, samples=3 00:46:13.718 iops : min=16230, max=17115, avg=16565.67, stdev=479.63, samples=3 00:46:13.718 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:46:13.718 lat (msec) : 2=0.10%, 4=65.40%, 10=34.46% 00:46:13.718 cpu : usr=98.85%, sys=0.15%, ctx=3, majf=0, minf=605 00:46:13.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:46:13.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:13.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:13.718 issued rwts: total=32607,32688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:13.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:13.718 00:46:13.718 Run status group 0 (all jobs): 00:46:13.718 READ: bw=63.7MiB/s (66.7MB/s), 63.7MiB/s-63.7MiB/s (66.7MB/s-66.7MB/s), io=127MiB (134MB), run=2001-2001msec 00:46:13.718 WRITE: bw=63.8MiB/s (66.9MB/s), 63.8MiB/s-63.8MiB/s (66.9MB/s-66.9MB/s), io=128MiB (134MB), run=2001-2001msec 00:46:13.976 ----------------------------------------------------- 00:46:13.976 Suppressions used: 00:46:13.976 count bytes template 00:46:13.976 1 32 /usr/src/fio/parse.c 00:46:13.976 1 8 libtcmalloc_minimal.so 00:46:13.976 ----------------------------------------------------- 00:46:13.976 00:46:13.976 09:57:20 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:46:13.976 09:57:20 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:46:13.976 00:46:13.976 real 0m17.819s 00:46:13.976 user 0m13.846s 00:46:13.976 sys 0m3.361s 00:46:13.976 09:57:20 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:13.976 ************************************ 00:46:13.976 END TEST nvme_fio 00:46:13.976 ************************************ 00:46:13.976 09:57:20 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:46:13.976 00:46:13.976 real 1m32.723s 00:46:13.976 user 3m47.524s 00:46:13.976 sys 0m16.624s 00:46:13.976 09:57:20 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:13.976 ************************************ 00:46:13.976 END TEST nvme 00:46:13.976 ************************************ 00:46:13.976 09:57:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:46:14.235 09:57:21 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:46:14.235 09:57:21 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:46:14.235 09:57:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:14.235 09:57:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:14.235 09:57:21 -- common/autotest_common.sh@10 -- # set +x 00:46:14.235 ************************************ 00:46:14.235 START TEST nvme_scc 00:46:14.235 ************************************ 00:46:14.235 09:57:21 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:46:14.235 * Looking for test storage... 
00:46:14.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:46:14.235 09:57:21 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:14.235 09:57:21 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:46:14.235 09:57:21 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:14.235 09:57:21 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@345 -- # : 1 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@368 -- # return 0 00:46:14.235 09:57:21 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:14.235 09:57:21 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:14.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:14.235 --rc genhtml_branch_coverage=1 00:46:14.235 --rc genhtml_function_coverage=1 00:46:14.235 --rc genhtml_legend=1 00:46:14.235 --rc geninfo_all_blocks=1 00:46:14.235 --rc geninfo_unexecuted_blocks=1 00:46:14.235 00:46:14.235 ' 00:46:14.235 09:57:21 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:14.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:14.235 --rc genhtml_branch_coverage=1 00:46:14.235 --rc genhtml_function_coverage=1 00:46:14.235 --rc genhtml_legend=1 00:46:14.235 --rc geninfo_all_blocks=1 00:46:14.235 --rc geninfo_unexecuted_blocks=1 00:46:14.235 00:46:14.235 ' 00:46:14.235 09:57:21 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:46:14.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:14.235 --rc genhtml_branch_coverage=1 00:46:14.235 --rc genhtml_function_coverage=1 00:46:14.235 --rc genhtml_legend=1 00:46:14.235 --rc geninfo_all_blocks=1 00:46:14.235 --rc geninfo_unexecuted_blocks=1 00:46:14.235 00:46:14.235 ' 00:46:14.235 09:57:21 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:14.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:14.235 --rc genhtml_branch_coverage=1 00:46:14.235 --rc genhtml_function_coverage=1 00:46:14.235 --rc genhtml_legend=1 00:46:14.235 --rc geninfo_all_blocks=1 00:46:14.235 --rc geninfo_unexecuted_blocks=1 00:46:14.235 00:46:14.235 ' 00:46:14.235 09:57:21 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:14.235 09:57:21 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:14.235 09:57:21 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.235 09:57:21 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.235 09:57:21 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.235 09:57:21 nvme_scc -- paths/export.sh@5 -- # export PATH 00:46:14.235 09:57:21 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:46:14.235 09:57:21 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:46:14.235 09:57:21 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:14.235 09:57:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:46:14.235 09:57:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:46:14.235 09:57:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:46:14.235 09:57:21 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:14.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:14.863 Waiting for block devices as requested 00:46:14.864 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:46:15.121 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:46:15.121 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:46:15.121 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:46:20.389 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:46:20.389 09:57:27 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:46:20.389 09:57:27 nvme_scc -- scripts/common.sh@18 -- # local i 00:46:20.389 09:57:27 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:46:20.389 09:57:27 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:46:20.389 09:57:27 nvme_scc -- scripts/common.sh@27 -- # return 0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
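
The functions.sh@17-23 lines just captured are the entire parser: nvme_get runs nvme-cli's id-ctrl on the device, splits each output line on ':' with `read -r reg val`, and evals the pair into a global associative array, which is why every field in the dump that continues below lands as nvme0[reg]=val. Reduced to a standalone sketch (whitespace handling simplified, and invoking whatever nvme binary is on PATH rather than the pinned /usr/local/src/nvme-cli build; the traced helper also shifts off its arguments and is reused for id-ns):

    #!/usr/bin/env bash
    # Capture every "field : value" line of nvme-cli output into a global
    # associative array named by $1, mirroring the traced nvme_get helper.
    nvme_get() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"              # e.g. declares global nvme0=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # field names are single tokens
            val=${val# }                 # keep trailing spaces, as the trace does
            [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
        done < <(nvme id-ctrl "$dev")
    }

    nvme_get nvme0 /dev/nvme0
    printf 'vid=%s sn=%q\n' "${nvme0[vid]}" "${nvme0[sn]}"
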
00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:46:20.389 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
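
Two of the values captured above are worth decoding. mdts=7 is a shift count over the controller's minimum memory page size (4 KiB on this QEMU device, assuming CAP.MPSMIN=0), so any single transfer is capped at 2^7 x 4 KiB = 512 KiB; oacs=0x12a is a bitmask of optional admin commands (bits 1, 3, 5, and 8: Format NVM, Namespace Management, Directives, Doorbell Buffer Config). A quick check of both:

    # mdts: maximum data transfer size, as a power of two of the min page size.
    mdts=7 mpsmin=4096
    echo "max transfer: $(( (1 << mdts) * mpsmin )) bytes"    # 524288 (512 KiB)

    # oacs: bit 3 set means the Namespace Management command set is supported.
    oacs=0x12a
    (( oacs & (1 << 3) )) && echo "namespace management supported"
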
00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:46:20.390 09:57:27 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.390 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.390 09:57:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:46:20.391 09:57:27 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.391 09:57:27 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:46:20.392 
09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
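
The lbaf0..lbaf7 strings recorded below describe each LBA format the namespace supports: lbads is the data size as a power of two (lbads:9 = 512 B, lbads:12 = 4096 B), ms is the per-block metadata size, and flbas=0x4 above marks lbaf4, 4 KiB with no metadata, as the format in use. A small decoder for the captured strings (field order assumed to match this nvme-cli build's id-ns output):

    # Turn an "ms:<n> lbads:<n> rp:<n>" string from nvme id-ns into bytes.
    decode_lbaf() {
        local ms lbads rp
        read -r ms lbads rp _ <<< "$1"
        echo "data: $(( 1 << ${lbads#lbads:} )) B, metadata: ${ms#ms:} B, rp: ${rp#rp:}"
    }

    decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # -> data: 4096 B, metadata: 0 B, rp: 0
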
00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.392 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:46:20.393 09:57:27 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:46:20.393 09:57:27 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.393 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:46:20.394 09:57:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.394 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:46:20.395 09:57:27 nvme_scc -- scripts/common.sh@18 -- # local i 00:46:20.395 09:57:27 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:46:20.395 09:57:27 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:46:20.395 09:57:27 nvme_scc -- scripts/common.sh@27 -- # return 0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:46:20.395 09:57:27 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.395 
09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:46:20.395 
09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.395 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.396 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
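The id-ctrl dump above is the nvme_get helper filling a global associative array: every "reg : val" line printed by /usr/local/src/nvme-cli/nvme is split with IFS=: and read -r reg val, whitespace is stripped from the register name, and the pair is stored with eval (functions.sh@16-23 in the trace). Below is a minimal runnable sketch of that pattern, reconstructed from the trace rather than copied from nvme/functions.sh; the fake_id_ctrl stub and its sample values are invented here so the sketch runs without NVMe hardware.

#!/usr/bin/env bash
# Sketch of the parsing loop traced above (functions.sh@16-23), assuming
# nvme-cli's plain-text "reg : val" output format. Not the verbatim source.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # e.g. declare -gA nvme1=()
    while IFS=: read -r reg val; do      # split "vid : 0x1b36" at the colon
        reg=${reg//[[:space:]]/}         # "lbaf  0 " -> "lbaf0"
        val=${val# }                     # drop the single space after ":"
        [[ -n $reg && -n $val ]] || continue   # skip headers and blank lines
        eval "${ref}[${reg}]=\"${val}\"" # nvme1[vid]="0x1b36", as in the log
    done < <("$@")                       # in the log: .../nvme id-ctrl /dev/nvme1
}

# fake_id_ctrl is a made-up stand-in for "nvme id-ctrl" so this runs anywhere;
# note the trailing space on sn, which the real parse also preserves:
fake_id_ctrl() { printf 'vid       : 0x1b36\nmdts      : 7\nsn        : 12340 \n'; }

nvme_get nvme1 fake_id_ctrl
echo "${nvme1[vid]} ${nvme1[mdts]} '${nvme1[sn]}'"   # -> 0x1b36 7 '12340 '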
00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:20.659 09:57:27 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.659 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:46:20.660 09:57:27 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
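Around the two nvme_get calls above, the loop at functions.sh@54 enumerates the controller's namespaces: an extglob pattern under /sys/class/nvme/nvme1 matches both the generic character nodes (ng1n1) and the block nodes (nvme1n1), each match gets an id-ns pass, and _ctrl_ns is keyed by the digits after the last "n", so the later nvme1n1 entry overwrites the ng1n1 one at the same namespace id. A short sketch of that enumeration follows, reconstructed from the trace and not the verbatim script; the echo is an addition for illustration, and on a host with no NVMe controllers the loop simply matches nothing.

#!/usr/bin/env bash
# Sketch of the namespace enumeration traced above (functions.sh@54-58),
# reconstructed from the trace rather than copied from nvme/functions.sh.
shopt -s extglob nullglob   # extglob for @(...); nullglob so no match = empty loop

declare -A _ctrl_ns         # the real script binds this via "local -n" to nvme1_ns

for ctrl in /sys/class/nvme/nvme*; do
    # For ctrl=/sys/class/nvme/nvme1 this expands to @("ng1"|"nvme1n")*,
    # matching ng1n1 (generic char dev) and nvme1n1 (block dev) alike:
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}               # ng1n1 or nvme1n1
        _ctrl_ns[${ns##*n}]=$ns_dev    # key = text after the last "n" (the NSID);
                                       # nvme1n1 overwrites ng1n1, as in the log
        echo "nsid=${ns##*n} -> $ns_dev"
    done
done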
00:46:20.660 09:57:27 nvme_scc -- nvme/functions.sh@21-23 -- [repetitive IFS=:/read/eval trace condensed] ng1n1 id-ns (continued): rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:46:20.661 09:57:27 nvme_scc -- [condensed] ng1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:46:20.661 09:57:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:46:20.661 09:57:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:46:20.661 09:57:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:46:20.661 09:57:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:46:20.661 09:57:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
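The condensed blocks above all come from the same helper visible in this trace: nvme_get runs nvme-cli's id-ns/id-ctrl against a device and reads the resulting "field : value" lines into a global associative array named after the device. A minimal sketch of that pattern, assuming nvme-cli's default human-readable output; the function name parse_nvme_id and the trimming details are illustrative, not SPDK's exact code:

  #!/usr/bin/env bash
  # Sketch of the nvme_get pattern traced above: split each "reg : val"
  # line from nvme-cli on the first ':' and stash it in a global
  # associative array named after the device (ng1n1, nvme1n1, ...).
  shopt -s extglob
  parse_nvme_id() {
    local ref=$1 subcmd=$2 dev=$3 reg val
    declare -gA "$ref"                   # e.g. declares global array nvme1n1
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}           # drop the padding around the key
      val=${val##+([[:space:]])}         # trim leading blanks off the value
      [[ -n $reg && -n $val ]] || continue
      eval "${ref}[\$reg]=\$val"         # nvme1n1[nsze]=0x17a17a, ...
    done < <(nvme "$subcmd" "$dev")
  }
  parse_nvme_id nvme1n1 id-ns /dev/nvme1n1
  echo "nsze=${nvme1n1[nsze]}"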
00:46:20.661 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:46:20.661 09:57:27 nvme_scc -- [repetitive IFS=:/read/eval trace condensed] nvme1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:46:20.663 09:57:27 nvme_scc -- [condensed] nvme1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:46:20.663 09:57:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
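Worth noting when reading these captures: flbas selects which of the lbafN formats is in use, and lbads inside that format is log2 of the data block size. For nvme1n1 above, flbas=0x7 points at lbaf7 (lbads:12, ms:64), i.e. 4096-byte data blocks with 64 bytes of metadata, and nsze=0x17a17a is the namespace size counted in those blocks. A back-of-the-envelope check using the strings from this log (assumes the lbafN strings keep nvme-cli's "ms:.. lbads:.. rp:.." shape):

  # Hedged sanity check against the nvme1n1 values captured above.
  flbas=0x7
  lbaf7='ms:64 lbads:12 rp:0 (in use)'
  fmt=$(( flbas & 0xf ))                               # in-use format index -> 7
  lbads=$(sed -E 's/.*lbads:([0-9]+).*/\1/' <<< "$lbaf7")
  echo "lbaf$fmt: $(( 1 << lbads ))-byte blocks"       # -> 4096-byte blocks
  echo "namespace: $(( 0x17a17a )) blocks"             # nsze in blocks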
00:46:20.663 09:57:27 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:46:20.663 09:57:27 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:46:20.663 09:57:27 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:46:20.663 09:57:27 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:46:20.663 09:57:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:46:20.663 09:57:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:46:20.663 09:57:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:46:20.663 09:57:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:46:20.663 09:57:27 nvme_scc -- scripts/common.sh@27 -- # return 0
00:46:20.663 09:57:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:46:20.663 09:57:27 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:46:20.663 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:46:20.663 09:57:27 nvme_scc -- [repetitive IFS=:/read/eval trace condensed] nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12342' mn='QEMU NVMe Ctrl' fr='8.0.0' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
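The loop producing these per-namespace captures is visible in the trace itself: for each controller under /sys/class/nvme, one extglob pattern matches both the generic character namespaces (ngXnY) and the block namespaces (nvmeXnY), which is why each namespace appears twice above, once per device node. A simplified sketch of that enumeration (nullglob added here for self-containment; the PCI allowlist check from scripts/common.sh is omitted):

  # Sketch of the enumeration traced at nvme/functions.sh@47-57: walk the
  # controllers in sysfs and match both ngXnY (char) and nvmeXnY (block)
  # namespace nodes with one extglob pattern.
  shopt -s extglob nullglob
  for ctrl in /sys/class/nvme/nvme*; do
    ctrl_dev=${ctrl##*/}                  # e.g. nvme2
    inst=${ctrl_dev#nvme}                 # e.g. 2
    for ns in "$ctrl/"@("ng$inst"|"${ctrl_dev}n")*; do
      echo "parse ${ns##*/} via /dev/${ns##*/}"
    done
  done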
09:57:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:46:20.666 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.929 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:46:20.930 09:57:27 nvme_scc -- 
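Note: functions.sh@53-58 in the trace is the per-namespace loop; ng2n1 has just been registered in _ctrl_ns and the loop has moved on to ng2n2. The extglob pattern matches both the character-device nodes (ng2n1..ng2n3) and the block-device nodes (nvme2n1, ...) under the controller's sysfs directory, which is why both families appear in this log. A simplified reconstruction (assumes `shopt -s extglob` and that it runs inside a function, as in the original, so `local -n` is legal):

  # Sketch: enumerate a controller's namespaces and id-ns each one (functions.sh@53-58).
  ctrl=/sys/class/nvme/nvme2
  local -n _ctrl_ns=nvme2_ns                      # nameref: assignments land in nvme2_ns
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue                    # skip if the glob matched nothing
      ns_dev=${ns##*/}                            # e.g. ng2n1, nvme2n1
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # fills the ${ns_dev}[...] array
      _ctrl_ns[${ns##*n}]=$ns_dev                 # key is the namespace number ("1")
  done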
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.930 
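Note: flbas=0x4 here explains the "(in use)" marker that appears on lbaf4 further down: per the NVMe spec, the low four bits of FLBAS select the active LBA format index when the namespace has at most 16 formats, as nlbaf=7 guarantees here. A hypothetical standalone check, not part of functions.sh:

  flbas=0x4
  echo "active LBA format: lbaf$(( flbas & 0xf ))"   # -> lbaf4 (ms:0 lbads:12)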
09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.930 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:20.931 09:57:27 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.931 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:20.932 09:57:27 
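Note: in each lbafN value, lbads is log2 of the LBA data size, so lbads:9 means 512-byte and lbads:12 means 4096-byte blocks; ms is the per-block metadata size in bytes and rp a relative-performance hint. A hypothetical snippet pulling the in-use block size out of the array nvme_get populated above (ng2n2 and lbaf4 are the values from this trace):

  lbaf=${ng2n2[lbaf4]}                         # "ms:0 lbads:12 rp:0 (in use)"
  lbads=${lbaf#*lbads:}; lbads=${lbads%% *}    # -> 12
  echo "block size: $(( 1 << lbads )) bytes"   # -> 4096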
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:46:20.932 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- 
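Note: the loop has now reached the block-device node nvme2n1, which reports the same geometry as its character-device twin ng2n1. With nsze=0x100000 LBAs and the in-use 4096-byte format, each of these namespaces works out to 4 GiB; a hypothetical check using bash arithmetic:

  nsze=0x100000                        # LBA count from the id-ns output above
  echo $(( nsze * 4096 ))              # 4294967296 bytes = 4 GiB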
nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:46:20.933 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:20.934 09:57:27 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:46:20.934 09:57:27 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.934 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:46:20.935 09:57:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.935 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:20.936 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:46:21.197 
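[editor's note] The same eight lbaf descriptors recur for every namespace in this trace, so they are worth decoding once: flbas=0x4 selects format index 4, whose descriptor 'ms:0 lbads:12 rp:0 (in use)' means no interleaved metadata and a 2^12 = 4096-byte logical block; the lbads:9 entries are the 512-byte variants. A throwaway snippet (assuming the arrays populated above are still in scope) that recovers the in-use block size:

    fmt=$(( nvme2n2[flbas] & 0xf ))      # low nibble of FLBAS = active format index -> 4
    lbads=${nvme2n2[lbaf$fmt]#*lbads:}   # "ms:0 lbads:12 rp:0 (in use)" -> "12 rp:0 (in use)"
    lbads=${lbads%% *}                   # -> "12"
    echo $(( 1 << lbads ))               # -> 4096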
09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.197 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:46:21.198 09:57:27 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:46:21.198 09:57:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:21.198 09:57:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.198 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:46:21.199 09:57:28 nvme_scc -- 
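[editor's note] With nvme2's namespaces finished, the trace records the bookkeeping at functions.sh lines 58-63 and advances to controller nvme3, gated by pci_can_use from scripts/common.sh (the allow/block-list variables are empty here, so the filter accepts every device). Reconstructed from the line tags alone, the outer loop is roughly the sketch below, not the verbatim SPDK source; the pci lookup in particular is an assumption:

    shopt -s extglob nullglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # trace shows pci=0000:00:13.0
        pci_can_use "$pci" || continue                    # allow/block-list gate
        ctrl_dev=${ctrl##*/}
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        declare -A _ctrl_ns=()
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            ns_dev=${ns##*/}                              # nvme2n1, ng2n3, ...
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev                   # keyed by namespace number
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of the per-ctrl ns map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done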
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:46:21.199 09:57:28 nvme_scc -- scripts/common.sh@18 -- # local i 00:46:21.199 09:57:28 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:46:21.199 09:57:28 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:46:21.199 09:57:28 nvme_scc -- scripts/common.sh@27 -- # return 0 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:46:21.199 09:57:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.199 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:46:21.200 09:57:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 
09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:46:21.200 09:57:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:46:21.200 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 
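The loop traced above is nvme/functions.sh's nvme_get helper populating the nvme3 associative array: functions.sh@16 runs nvme-cli's id-ctrl against the device, @21 splits each output line on ':' into a register name and value, @22 skips empty values, and @23 stores the pair through a quoted eval so that a single loop can fill a dynamically named array per controller. A minimal standalone sketch of that pattern, assuming nvme-cli is on PATH and prints one "name : value" pair per line:

  #!/usr/bin/env bash
  # Sketch of the nvme_get pattern in the xtrace above; not the full helper.
  ctrl_dev=nvme3
  declare -gA "$ctrl_dev=()"              # as at functions.sh@20: local -gA 'nvme3=()'
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}            # "ps    0" -> "ps0", matching the keys above
      [[ -n $reg && -n $val ]] || continue
      val=${val#"${val%%[![:space:]]*}"}  # strip leading space; keep trailing padding,
                                          # since sn/mn are fixed-width fields
      # Quoted eval, as at functions.sh@23: the array name is itself a variable.
      eval "${ctrl_dev}[\$reg]=\"\$val\""
  done < <(nvme id-ctrl "/dev/$ctrl_dev")
  declare -n _ctrl=$ctrl_dev              # nameref read-back, as at functions.sh@73
  echo "oncs=${_ctrl[oncs]} mdts=${_ctrl[mdts]}"

Values with embedded colons survive because read assigns everything after the first ':' to val, which is how ps0 ends up holding the whole 'mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' power-state string seen further down.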
09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:46:21.201 
09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.201 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:46:21.202 09:57:28 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:46:21.202 09:57:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
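ctrl_has_scc, traced above for nvme1 and nvme0 and about to run for nvme3, reduces to a single bit test: fetch the controller's ONCS register through a bash nameref and check bit 8, the Optional NVM Command Support flag for the Copy command. 0x15d has bit 8 (0x100) set, so every controller in this run supports Simple Copy. A condensed sketch of the check, with stand-in data replacing the arrays built by the scan above:

  #!/usr/bin/env bash
  # Stand-ins for the maps scan_nvme_ctrls populated above.
  declare -A ctrls=([nvme0]=nvme0 [nvme1]=nvme1)
  declare -A nvme0=([oncs]=0x15d) nvme1=([oncs]=0x15d)

  ctrl_has_scc() {
      local ctrl=$1 oncs
      local -n _ctrl=$ctrl          # nameref, as at functions.sh@73
      oncs=${_ctrl[oncs]:-0}
      (( oncs & 1 << 8 ))           # ONCS bit 8: Copy command supported
  }

  for ctrl in "${!ctrls[@]}"; do
      ctrl_has_scc "$ctrl" && echo "$ctrl"
  done

get_ctrls_with_feature echoes every match (associative-array order, hence the nvme1, nvme0, nvme3, nvme2 sequence in the trace) and the caller keeps the first element, which is why the test settles on nvme1 at 0000:00:10.0.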
00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:46:21.202 09:57:28 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:46:21.202 09:57:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:46:21.202 09:57:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:46:21.202 09:57:28 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:21.769 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:22.334 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:46:22.334 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:46:22.334 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:46:22.334 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:46:22.594 09:57:29 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:46:22.594 09:57:29 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:46:22.594 09:57:29 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:22.594 09:57:29 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:46:22.594 ************************************ 00:46:22.594 START TEST nvme_simple_copy 00:46:22.594 ************************************ 00:46:22.594 09:57:29 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:46:22.852 Initializing NVMe Controllers 00:46:22.852 Attaching to 0000:00:10.0 00:46:22.852 Controller supports SCC. Attached to 0000:00:10.0 00:46:22.852 Namespace ID: 1 size: 6GB 00:46:22.852 Initialization complete. 
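run_test, whose START/END banners and real/user/sys totals follow, wraps each test binary with banners and bash's time builtin so every test in the log carries its own wall-clock accounting. A hypothetical simplification of that wrapper (the real one in common/autotest_common.sh also validates its arguments, as the '[' 4 -le 1 ']' check above shows, and toggles xtrace around the body):

  #!/usr/bin/env bash
  # Hypothetical, simplified run_test; illustrative only.
  run_test() {
      local name=$1 rc
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
  }

  # Mirroring the invocation above:
  # run_test nvme_simple_copy ./simple_copy -r 'trtype:pcie traddr:0000:00:10.0'

The -r transport string hands the PCIe address of the selected controller (nvme1's BDF, 0000:00:10.0) straight to the SPDK test binary, which is why the feature probe had to resolve both the controller name and its bdf.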
00:46:22.852 00:46:22.852 Controller QEMU NVMe Ctrl (12340 ) 00:46:22.852 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:46:22.852 Namespace Block Size:4096 00:46:22.852 Writing LBAs 0 to 63 with Random Data 00:46:22.852 Copied LBAs from 0 - 63 to the Destination LBA 256 00:46:22.852 LBAs matching Written Data: 64 00:46:22.852 00:46:22.852 real 0m0.340s 00:46:22.852 user 0m0.141s 00:46:22.852 sys 0m0.096s 00:46:22.852 ************************************ 00:46:22.852 END TEST nvme_simple_copy 00:46:22.852 ************************************ 00:46:22.852 09:57:29 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:22.852 09:57:29 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:46:22.852 ************************************ 00:46:22.852 END TEST nvme_scc 00:46:22.852 ************************************ 00:46:22.852 00:46:22.852 real 0m8.762s 00:46:22.852 user 0m1.799s 00:46:22.852 sys 0m1.763s 00:46:22.852 09:57:29 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:22.852 09:57:29 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:46:22.852 09:57:29 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:46:22.852 09:57:29 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:46:22.852 09:57:29 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:46:22.852 09:57:29 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:46:22.852 09:57:29 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:46:22.852 09:57:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:22.852 09:57:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:22.852 09:57:29 -- common/autotest_common.sh@10 -- # set +x 00:46:22.852 ************************************ 00:46:22.852 START TEST nvme_fdp 00:46:22.852 ************************************ 00:46:22.852 09:57:29 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:46:23.110 * Looking for test storage... 00:46:23.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:46:23.110 09:57:29 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:23.111 09:57:29 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:46:23.111 09:57:29 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:23.111 09:57:30 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:46:23.111 09:57:30 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:23.111 09:57:30 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:23.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:23.111 --rc genhtml_branch_coverage=1 00:46:23.111 --rc genhtml_function_coverage=1 00:46:23.111 --rc genhtml_legend=1 00:46:23.111 --rc geninfo_all_blocks=1 00:46:23.111 --rc geninfo_unexecuted_blocks=1 00:46:23.111 00:46:23.111 ' 00:46:23.111 09:57:30 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:23.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:23.111 --rc genhtml_branch_coverage=1 00:46:23.111 --rc genhtml_function_coverage=1 00:46:23.111 --rc genhtml_legend=1 00:46:23.111 --rc geninfo_all_blocks=1 00:46:23.111 --rc geninfo_unexecuted_blocks=1 00:46:23.111 00:46:23.111 ' 00:46:23.111 09:57:30 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:23.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:23.111 --rc genhtml_branch_coverage=1 00:46:23.111 --rc genhtml_function_coverage=1 00:46:23.111 --rc genhtml_legend=1 00:46:23.111 --rc geninfo_all_blocks=1 00:46:23.111 --rc geninfo_unexecuted_blocks=1 00:46:23.111 00:46:23.111 ' 00:46:23.111 09:57:30 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:23.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:23.111 --rc genhtml_branch_coverage=1 00:46:23.111 --rc genhtml_function_coverage=1 00:46:23.111 --rc genhtml_legend=1 00:46:23.111 --rc geninfo_all_blocks=1 00:46:23.111 --rc geninfo_unexecuted_blocks=1 00:46:23.111 00:46:23.111 ' 00:46:23.111 09:57:30 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:23.111 09:57:30 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:23.111 09:57:30 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:23.111 09:57:30 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:23.111 09:57:30 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:23.111 09:57:30 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:46:23.111 09:57:30 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:46:23.111 09:57:30 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:46:23.111 09:57:30 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:23.111 09:57:30 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:23.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:23.679 Waiting for block devices as requested 00:46:23.679 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:46:23.938 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:46:23.938 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:46:23.938 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:46:29.207 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:46:29.207 09:57:36 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:46:29.207 09:57:36 nvme_fdp 
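The cmp_versions walkthrough above (scripts/common.sh@333-368) gates the lcov coverage flags on a component-wise version comparison: both strings are split on '.', '-' and ':' into arrays, then compared numerically element by element until one side wins, so 'lt 1.15 2' succeeds on the first component. A standalone sketch of the same idea (not the exact SPDK helper, which also validates each component via decimal()):

  #!/usr/bin/env bash
  # Component-wise "less than" for dotted versions; sketch of the
  # cmp_versions logic traced above, numeric components assumed.
  version_lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1        # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "1.15 < 2"    # prints: 1.15 < 2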
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:46:29.207 09:57:36 nvme_fdp -- scripts/common.sh@18 -- # local i 00:46:29.207 09:57:36 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:46:29.207 09:57:36 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:46:29.207 09:57:36 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.207 09:57:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:46:29.207 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:46:29.208 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:46:29.208 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.208 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:46:29.209 09:57:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.209 
09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:46:29.209 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:46:29.210 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:46:29.210 09:57:36 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:46:29.210 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:46:29.210 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
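
The for-ns loop traced at functions.sh@54 above is what discovers both the ng0n1 character node and the nvme0n1 block node under a single controller. A minimal standalone sketch of that extglob pattern, using the nvme0 path from this dump (the echo body is illustrative, not the real loop body):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme0
    # @(ng0|nvme0n)* matches both the generic char node and the block node
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        echo "namespace node: ${ns##*/}"   # -> ng0n1, then nvme0n1
    done

The @(...) alternation needs extglob enabled before the pattern is parsed; nullglob keeps the loop silent on a controller that exposes no namespaces.
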
00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:46:29.211 09:57:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.211 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
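
Each eval line above is one iteration of nvme_get's field loop: nvme-cli prints "reg : val" rows, IFS=: splits each row at the first colon, and non-empty pairs land in a bash associative array named after the device. A runnable sketch of that pattern, assuming bash 4.3+ for the nameref (parse_id_output is a hypothetical name; the real helper lives in nvme/functions.sh):

    parse_id_output() {              # $1 = array name, rest = nvme-cli command
        local -n _arr=$1; shift
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/} # field names are padded, e.g. "lbaf  4"
            val=${val# }             # drop the space that follows the colon
            [[ -n $reg && -n $val ]] && _arr[$reg]=$val
        done < <("$@")
    }

    declare -A ng0n1=()
    parse_id_output ng0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
    echo "nsze=${ng0n1[nsze]} flbas=${ng0n1[flbas]}"

Because whitespace is stripped from the key, a row like "lbaf  4 : ms:0 lbads:12 rp:0 (in use)" is stored under lbaf4, matching the keys seen in the trace.
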
00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:46:29.212 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.212 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:29.213 09:57:36 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:46:29.213 09:57:36 nvme_fdp -- scripts/common.sh@18 -- # local i 00:46:29.213 09:57:36 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:46:29.213 09:57:36 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:46:29.213 09:57:36 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:46:29.213 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:46:29.214 09:57:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
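The trace above is the nvme_get helper walking `nvme id-ctrl` output one "reg : val" pair at a time: IFS=: splits each line at the first colon, the [[ -n ... ]] test skips pairs with an empty value, and eval stores the rest into a bash associative array declared with `local -gA`. A minimal self-contained sketch of that pattern, assuming nvme-cli is installed; the function name parse_id_output and the fixed array name ctrl are illustrative, not the functions.sh API (which evals into a dynamically named array):

  #!/usr/bin/env bash
  declare -A ctrl=()
  parse_id_output() {
      local reg val
      while IFS=: read -r reg val; do       # split at the first ':' only
          [[ -n $val ]] || continue         # skip banner lines and empty values
          reg=${reg//[[:space:]]/}          # drop the column padding from the key
          ctrl[$reg]=${val# }               # keep the value, trailing blanks and all
      done
  }
  parse_id_output < <(nvme id-ctrl /dev/nvme1)
  printf 'vid=%s mn=%s\n' "${ctrl[vid]}" "${ctrl[mn]}"

Note how the power-state lines later in this dump split the same way: everything after the first colon lands in val, which is why the stored rwt value still contains the embedded "rwl:0 idle_power:- active_power:-" text.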
00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.214 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
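Most registers cached this way are plain hex bitmasks, so downstream feature probes reduce to shell arithmetic on the stored strings. A hedged example using the oacs value 0x12a recorded earlier in this dump (bit positions per the NVMe base specification; this is not code from functions.sh):

  oacs=0x12a    # Optional Admin Command Support, as cached above
  (( oacs & 0x002 )) && echo "Format NVM supported"            # bit 1
  (( oacs & 0x008 )) && echo "Namespace Management supported"  # bit 3
  (( oacs & 0x020 )) && echo "Directives supported"            # bit 5

All three tests fire for 0x12a (binary 1 0010 1010: bits 1, 3, 5, and 8 set).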
00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.215 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.216 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:46:29.482 09:57:36 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
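In the id-ns dump being parsed here, flbas (0x7) is what ties the lbafN descriptor table to the format actually in use: its low four bits index the descriptor list, which is why the lbaf7 entry further down carries the "(in use)" marker. A small decode sketch with this namespace's values, using an array literal in place of the parse loop purely for illustration:

  declare -A ng1n1=( [flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)' )
  fmt=$(( ${ng1n1[flbas]} & 0xf ))     # low nibble = current LBA format index
  echo "in-use descriptor: lbaf${fmt} -> ${ng1n1[lbaf$fmt]}"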
00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.482 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:46:29.483 09:57:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
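The size fields cached for ng1n1 are counted in logical blocks, not bytes; with the in-use descriptor just below reporting lbads:12 (4096-byte blocks), capacity follows as nsze x 2^lbads. A worked one-liner with this trace's numbers:

  nsze=0x17a17a   # namespace size in logical blocks, from the dump above
  lbads=12        # log2(block size), from the in-use lbaf7 descriptor
  echo "$(( nsze * (1 << lbads) )) bytes"   # 1548666 * 4096 = 6343335936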
00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:29.483 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.483 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:29.484 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:46:29.484 09:57:36 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:46:29.484 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:46:29.484 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
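What the trace above shows is the harness's nvme_get helper at work: nvme/functions.sh runs nvme-cli, reads each line of its output with 'IFS=: read -r reg val', and for every line that carries a value it evals an assignment into a global associative array named after the device node (nvme1n1 here), so later test logic can look fields up as ${nvme1n1[nguid]}. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied verbatim from nvme/functions.sh, and assuming a plain 'nvme' binary on PATH:

  # Parse "field : value" lines from nvme-cli into a global assoc array.
  # Reconstructed from the xtrace above; whitespace trimming is approximate.
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue          # skip lines without a value
          reg=${reg//[[:space:]]/}           # strip padding around the name
          eval "${ref}[${reg}]=\"${val# }\"" # e.g. nvme1n1[nguid]="0000..."
      done < <(nvme "$@")
  }

  # Usage, matching the call pattern visible in the trace:
  #   nvme_get nvme1n1 id-ns /dev/nvme1n1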
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]]
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "'
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]]
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"'
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:46:29.485 09:57:36 nvme_fdp -- scripts/common.sh@18 -- # local i
00:46:29.485 09:57:36 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:46:29.485 09:57:36 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:46:29.485 09:57:36 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- #
[[ -n '' ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:46:29.485 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
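A couple of the nvme2 id-ctrl fields just captured are worth decoding: ver=0x10400 is the version-register encoding for NVMe 1.4.0, and mdts=7 limits any single data transfer to 2^7 units of the controller's minimum memory page size (CAP.MPSMIN). Assuming the usual 4 KiB minimum page for QEMU's emulated controller (an assumption, the trace does not print CAP), that works out to 512 KiB; illustrative arithmetic only, not part of the test scripts:

  mdts=7
  min_page=4096                     # assumed CAP.MPSMIN == 0 -> 4 KiB pages
  echo "$(( (1 << mdts) * min_page / 1024 )) KiB max transfer"   # -> 512 KiB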
00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:46:29.486 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
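The thresholds recorded above, wctemp=343 and cctemp=373, are the warning and critical composite-temperature values, which id-ctrl reports in kelvins per the NVMe spec; a quick conversion (illustrative only, not part of the harness) puts them at roughly 70 C and 100 C:

  wctemp=343 cctemp=373
  echo "warning $((wctemp - 273)) C, critical $((cctemp - 273)) C"   # 70 C / 100 C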
00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:46:29.486 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:46:29.487 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.487 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
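With nvme2's controller map filled in, the trace below switches to its namespaces: functions.sh binds a nameref to the per-controller array nvme2_ns, then globs both the generic character nodes (ng2n1, ng2n2) and the block nodes (nvme2n1, ...) under /sys/class/nvme/nvme2 with an extglob pattern, and runs nvme_get ... id-ns on each match. A simplified sketch of that loop, using the nvme_get sketch from earlier and not the verbatim nvme/functions.sh source:

  shopt -s extglob nullglob

  scan_ctrl_namespaces() {
      local ctrl=$1 ns ns_dev              # ctrl: /sys/class/nvme/nvme2
      local -n _ctrl_ns="${ctrl##*/}_ns"   # nameref to e.g. nvme2_ns
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
          ns_dev=${ns##*/}                 # ng2n1, nvme2n1, ...
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
          _ctrl_ns[${ns##*n}]=$ns_dev      # index by namespace number
      done
  }

In this run the ng2n* character nodes match: ng2n1's id-ns output below reports flbas=0x4 with lbaf4 (ms:0 lbads:12) in use, i.e. 4096-byte logical blocks, and nsze=0x100000 such blocks, which is 4 GiB.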
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"'
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"'
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"'
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"'
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 --
IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 
09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.488 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:46:29.489 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:46:29.490 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 
09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.490 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:46:29.491 
09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:46:29.491 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
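The functions.sh@16-@23 trace lines above are the shell helper that turns "nvme id-ns" output into a global associative array named after the namespace device, one "register: value" pair per key (e.g. ng2n3[nsze]=0x100000). A minimal sketch of that helper, reconstructed from the trace alone -- the in-tree nvme/functions.sh may differ in detail (the trace prints IFS=:, so the exact field splitting is an assumption here):

  # Reconstruction of nvme_get as suggested by the functions.sh@16-@23 trace;
  # a sketch, not the canonical implementation.
  nvme_get() {
      local ref=$1 reg val                        # @17: target array name, e.g. ng2n3
      shift                                       # @18: remaining args go to nvme-cli
      local -gA "$ref=()"                         # @20: declare the global assoc array
      while IFS=': ' read -r reg val; do          # @21: split "reg : val" lines
          [[ -n $val ]] || continue               # @22: skip lines without a value
          eval "${ref}[$reg]=\"$val\""            # @23: e.g. ng2n3[dps]="0"
      done < <(/usr/local/src/nvme-cli/nvme "$@") # @16: e.g. id-ns /dev/ng2n3
  }

After nvme_get ng2n3 id-ns /dev/ng2n3, ${ng2n3[nsze]} expands to 0x100000, matching the eval assignments traced above.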
00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:46:29.492 09:57:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:29.492 09:57:36 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.492 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:46:29.493 09:57:36 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:46:29.493 
09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:46:29.493 09:57:36 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:46:29.493 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:46:29.494 
09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
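Each namespace pass ends at functions.sh@58, which files the freshly parsed device into _ctrl_ns keyed by its namespace number; the @54 extglob pattern matches both the ngXnY character nodes and the nvmeXnY block nodes under the controller, which is why every namespace appears twice in this trace (ng2n1 earlier, nvme2n1 here). The lbafN registers captured just above describe each LBA format: ms is the metadata bytes per block, lbads the data size as a power of two (lbads:12 is a 4096-byte block, 1 << 12), rp the relative performance, and "(in use)" flags the format selected by flbas (0x4 here, i.e. lbaf4). A sketch of the discovery loop as reconstructed from the @54-@58 trace lines -- the ctrl value and the extglob setting are assumptions, and the in-tree code may differ:

  # Namespace discovery around nvme_get, reconstructed from functions.sh@54-@58.
  declare -A _ctrl_ns                      # namespace index -> device name
  ctrl=/sys/class/nvme/nvme2               # assumed: set by an outer per-controller loop
  shopt -s extglob                         # the @(...) alternation below needs extglob
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54: ng2n* and nvme2n*
      [[ -e $ns ]] || continue             # @55: e.g. /sys/class/nvme/nvme2/nvme2n2
      ns_dev=${ns##*/}                     # @56: e.g. nvme2n2
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # @57: populate the assoc array
      _ctrl_ns[${ns##*n}]=$ns_dev          # @58: key is the namespace number
  done

Because ng2nN sorts before nvme2nN, the block-device name is the value left in _ctrl_ns for each index once the loop finishes, as the repeated _ctrl_ns[...] assignments in the trace show.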
00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:46:29.494 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:46:29.494 09:57:36 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:46:29.494 09:57:36 nvme_fdp -- # [id-ns /dev/nvme2n2, remaining fields; per-register IFS=:/read/eval xtrace condensed] mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:46:29.495 09:57:36 nvme_fdp -- # lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:46:29.496 09:57:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:46:29.496 09:57:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:46:29.496 09:57:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:46:29.496 09:57:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:46:29.496 09:57:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:46:29.496 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:46:29.496 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:46:29.496 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:46:29.496 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:46:29.758 09:57:36 nvme_fdp -- # [id-ns /dev/nvme2n3; per-register xtrace condensed] nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
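
The xtrace above is the nvme_get helper at work: it runs an nvme-cli subcommand and folds every "reg : val" line of the output into a bash associative array named after the device. A minimal sketch of that loop, reconstructed from the trace at functions.sh@16-23 (the whitespace trimming and anything else the trace does not show are assumptions, not the verbatim SPDK source):

    nvme_get() {                                # e.g. nvme_get nvme2n3 id-ns /dev/nvme2n3
        local ref=$1 reg val
        shift
        local -gA "${ref}=()"                   # global array, as in 'local -gA nvme2n3=()' (@20)
        while IFS=: read -r reg val; do         # split on the first ':' only (@21)
            [[ -n $val ]] || continue           # skip header/blank lines (@22)
            reg=${reg//[[:space:]]/}            # assumed trim: 'nsze ' -> 'nsze'
            val=${val#"${val%%[![:space:]]*}"}  # assumed trim of leading space; trailing space kept
            eval "${ref}[${reg}]=\"${val}\""    # e.g. nvme2n3[nsze]="0x100000" (@23)
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16
    }

Because the last read variable takes the remainder of the line, values that themselves contain colons (the lbafN descriptors such as 'ms:0 lbads:9 rp:0') survive intact, which is exactly what the eval lines in the trace show.
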
00:46:29.758 09:57:36 nvme_fdp -- # [id-ns /dev/nvme2n3, remaining fields; xtrace condensed] nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:46:29.759 09:57:36 nvme_fdp -- # lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:46:29.759 09:57:36 nvme_fdp -- scripts/common.sh@18 -- # local i
00:46:29.759 09:57:36 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:46:29.759 09:57:36 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:46:29.759 09:57:36 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
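
Condensed, the functions.sh@47-63 pass the trace keeps re-entering is a sysfs enumeration loop: walk every controller under /sys/class/nvme, keep the ones whose PCI address passes pci_can_use, id-ctrl the controller, id-ns each of its namespaces, and record the bookkeeping arrays. A sketch of its shape inferred from the traced line numbers (the function name and the BDF lookup are assumptions; everything else mirrors the trace):

    shopt -s extglob                              # @54 uses the @(...) glob form
    scan_nvmes() {                                # hypothetical name for the @47-63 pass
        local ctrl ns pci ctrl_dev ns_dev
        declare -gA ctrls nvmes bdfs
        declare -ga ordered_ctrls
        for ctrl in /sys/class/nvme/nvme*; do                     # @47
            [[ -e $ctrl ]] || continue                            # @48
            pci=$(readlink -f "$ctrl/device")                     # assumed BDF lookup (@49)
            pci=${pci##*/}                                        # e.g. 0000:00:13.0
            pci_can_use "$pci" || continue                        # @50: allow/block-list check (common.sh@18-27)
            ctrl_dev=${ctrl##*/}                                  # @51, e.g. nvme3
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"         # @52
            declare -gA "${ctrl_dev}_ns=()"                       # assumed: backing map for @53
            local -n _ctrl_ns=${ctrl_dev}_ns                      # @53
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54
                [[ -e $ns ]] || continue                          # @55
                ns_dev=${ns##*/}                                  # @56
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"           # @57
                _ctrl_ns[${ns##*n}]=$ns_dev                       # @58: key by namespace number
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                          # @60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns                     # @61
            bdfs["$ctrl_dev"]=$pci                                # @62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev            # @63
        done
        (( ${#ordered_ctrls[@]} > 0 ))                            # @65: fail if nothing usable
    }

With four QEMU controllers on this VM, the '(( 4 > 0 ))' check at @65 below passes and the test proceeds to feature selection.
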
nvme/functions.sh@21 -- # read -r reg val 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:46:29.759 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 
09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:46:29.760 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:46:29.761 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
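The trace above (and continuing below) is SPDK's nvme/functions.sh, lines 21-23, walking "register : value" pairs and caching each field in a bash associative array named after the controller. A minimal sketch of that loop, assuming nvme-cli's `nvme id-ctrl` two-column output format; this is a simplified reconstruction, not the verbatim helper:

    declare -A nvme3                          # one array per controller, as in the trace
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # register name, whitespace stripped ("ps 0" -> "ps0")
        read -r val <<< "$val"                # trim surrounding whitespace, keep inner spaces
        [[ -n $reg && -n $val ]] && nvme3[$reg]=$val   # mirrors: eval 'nvme3[vwc]="0x7"'
    done < <(nvme id-ctrl /dev/nvme3)         # nvme-cli prints lines like "vwc : 0x7"

Later steps, including the FDP capability check below, then read these cached fields instead of re-querying the device.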
00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:46:29.762 09:57:36 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:46:29.762 09:57:36 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 ))
00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:46:29.763 09:57:36 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:46:29.763 09:57:36 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:46:29.763 09:57:36 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:46:29.763 09:57:36 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:46:30.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:46:30.898 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:46:30.898 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:46:30.898 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:46:30.898 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:46:30.898 09:57:37 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:46:30.898 09:57:37 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:46:30.898 09:57:37 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:46:30.898 09:57:37 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:46:30.898 ************************************
00:46:30.898 START TEST nvme_flexible_data_placement
00:46:30.898 ************************************
00:46:30.898 09:57:37 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:46:31.157 Initializing NVMe Controllers
00:46:31.157 Attaching to 0000:00:13.0
00:46:31.157 Controller supports FDP
00:46:31.157 Attached to 0000:00:13.0
00:46:31.158 Namespace ID: 1 Endurance Group ID: 1
00:46:31.158 Initialization complete.
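The controller selection a few records back hinges on a single bit: CTRATT from Identify Controller advertises Flexible Data Placement in bit 19 (0x80000), which is why nvme3's ctratt of 0x88010 passes `(( ctratt & 1 << 19 ))` while the 0x8000 controllers fail (0x88010 & 0x80000 = 0x80000; 0x8000 & 0x80000 = 0). The check reduced to a standalone sketch; the nvme-cli parse is an assumption, since the traced helper reads the cached array instead:

    ctrl_has_fdp() {
        local ctrl=$1 ctratt
        # "ctratt : 0x88010" -> 0x88010; bash arithmetic accepts the 0x prefix
        ctratt=$(nvme id-ctrl "/dev/$ctrl" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
        (( ctratt & 1 << 19 ))        # exit 0 iff the FDP bit is set
    }
    ctrl_has_fdp nvme3 && echo nvme3  # matches the trace: only nvme3 is echoed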
00:46:31.158
00:46:31.158 ==================================
00:46:31.158 == FDP tests for Namespace: #01 ==
00:46:31.158 ==================================
00:46:31.158
00:46:31.158 Get Feature: FDP:
00:46:31.158 =================
00:46:31.158 Enabled: Yes
00:46:31.158 FDP configuration Index: 0
00:46:31.158
00:46:31.158 FDP configurations log page
00:46:31.158 ===========================
00:46:31.158 Number of FDP configurations: 1
00:46:31.158 Version: 0
00:46:31.158 Size: 112
00:46:31.158 FDP Configuration Descriptor: 0
00:46:31.158 Descriptor Size: 96
00:46:31.158 Reclaim Group Identifier format: 2
00:46:31.158 FDP Volatile Write Cache: Not Present
00:46:31.158 FDP Configuration: Valid
00:46:31.158 Vendor Specific Size: 0
00:46:31.158 Number of Reclaim Groups: 2
00:46:31.158 Number of Reclaim Unit Handles: 8
00:46:31.158 Max Placement Identifiers: 128
00:46:31.158 Number of Namespaces Supported: 256
00:46:31.158 Reclaim Unit Nominal Size: 6000000 bytes
00:46:31.158 Estimated Reclaim Unit Time Limit: Not Reported
00:46:31.158 RUH Desc #000: RUH Type: Initially Isolated
00:46:31.158 RUH Desc #001: RUH Type: Initially Isolated
00:46:31.158 RUH Desc #002: RUH Type: Initially Isolated
00:46:31.158 RUH Desc #003: RUH Type: Initially Isolated
00:46:31.158 RUH Desc #004: RUH Type: Initially Isolated
00:46:31.158 RUH Desc #005: RUH Type: Initially Isolated
00:46:31.158 RUH Desc #006: RUH Type: Initially Isolated
00:46:31.158 RUH Desc #007: RUH Type: Initially Isolated
00:46:31.158
00:46:31.158 FDP reclaim unit handle usage log page
00:46:31.158 ======================================
00:46:31.158 Number of Reclaim Unit Handles: 8
00:46:31.158 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:46:31.158 RUH Usage Desc #001: RUH Attributes: Unused
00:46:31.158 RUH Usage Desc #002: RUH Attributes: Unused
00:46:31.158 RUH Usage Desc #003: RUH Attributes: Unused
00:46:31.158 RUH Usage Desc #004: RUH Attributes: Unused
00:46:31.158 RUH Usage Desc #005: RUH Attributes: Unused
00:46:31.158 RUH Usage Desc #006: RUH Attributes: Unused
00:46:31.158 RUH Usage Desc #007: RUH Attributes: Unused
00:46:31.158
00:46:31.158 FDP statistics log page
00:46:31.158 =======================
00:46:31.158 Host bytes with metadata written: 836808704
00:46:31.158 Media bytes with metadata written: 836915200
00:46:31.158 Media bytes erased: 0
00:46:31.158
00:46:31.158 FDP Reclaim unit handle status
00:46:31.158 ==============================
00:46:31.158 Number of RUHS descriptors: 2
00:46:31.158 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000041f5
00:46:31.158 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:46:31.158
00:46:31.158 FDP write on placement id: 0 success
00:46:31.158
00:46:31.158 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:46:31.158
00:46:31.158 IO mgmt send: RUH update for Placement ID: #0 Success
00:46:31.158
00:46:31.158 Get Feature: FDP Events for Placement handle: #0
00:46:31.158 ========================
00:46:31.158 Number of FDP Events: 6
00:46:31.158 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:46:31.158 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:46:31.158 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:46:31.158 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:46:31.158 FDP Event: #4 Type: Media Reallocated Enabled: No
00:46:31.158 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:46:31.158
00:46:31.158 FDP events log page
00:46:31.158 ===================
00:46:31.158 Number of FDP events: 1
00:46:31.158 FDP Event #0:
00:46:31.158 Event Type: RU Not Written to Capacity
00:46:31.158 Placement Identifier: Valid
00:46:31.158 NSID: Valid
00:46:31.158 Location: Valid
00:46:31.158 Placement Identifier: 0
00:46:31.158 Event Timestamp: 9
00:46:31.158 Namespace Identifier: 1
00:46:31.158 Reclaim Group Identifier: 0
00:46:31.158 Reclaim Unit Handle Identifier: 0
00:46:31.158
00:46:31.158 FDP test passed
00:46:31.158
00:46:31.158 real 0m0.311s
00:46:31.158 user 0m0.112s
00:46:31.158 sys 0m0.096s
00:46:31.158 09:57:38 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:46:31.158 09:57:38 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:46:31.158 ************************************
00:46:31.158 END TEST nvme_flexible_data_placement
00:46:31.158 ************************************
00:46:31.417
00:46:31.417 real 0m8.354s
00:46:31.417 user 0m1.529s
00:46:31.417 sys 0m1.841s
00:46:31.417 09:57:38 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:46:31.417 09:57:38 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:46:31.417 ************************************
00:46:31.417 END TEST nvme_fdp
00:46:31.417 ************************************
00:46:31.417 09:57:38 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:46:31.417 09:57:38 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:46:31.417 09:57:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:46:31.417 09:57:38 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:46:31.417 09:57:38 -- common/autotest_common.sh@10 -- # set +x
00:46:31.417 ************************************
00:46:31.417 START TEST nvme_rpc
00:46:31.417 ************************************
00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:46:31.417 * Looking for test storage...
00:46:31.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:31.417 09:57:38 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:31.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:31.417 --rc genhtml_branch_coverage=1 00:46:31.417 --rc genhtml_function_coverage=1 00:46:31.417 --rc genhtml_legend=1 00:46:31.417 --rc geninfo_all_blocks=1 00:46:31.417 --rc geninfo_unexecuted_blocks=1 00:46:31.417 00:46:31.417 ' 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:31.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:31.417 --rc genhtml_branch_coverage=1 00:46:31.417 --rc genhtml_function_coverage=1 00:46:31.417 --rc genhtml_legend=1 00:46:31.417 --rc geninfo_all_blocks=1 00:46:31.417 --rc geninfo_unexecuted_blocks=1 00:46:31.417 00:46:31.417 ' 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:46:31.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:31.417 --rc genhtml_branch_coverage=1 00:46:31.417 --rc genhtml_function_coverage=1 00:46:31.417 --rc genhtml_legend=1 00:46:31.417 --rc geninfo_all_blocks=1 00:46:31.417 --rc geninfo_unexecuted_blocks=1 00:46:31.417 00:46:31.417 ' 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:31.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:31.417 --rc genhtml_branch_coverage=1 00:46:31.417 --rc genhtml_function_coverage=1 00:46:31.417 --rc genhtml_legend=1 00:46:31.417 --rc geninfo_all_blocks=1 00:46:31.417 --rc geninfo_unexecuted_blocks=1 00:46:31.417 00:46:31.417 ' 00:46:31.417 09:57:38 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:31.417 09:57:38 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:46:31.417 09:57:38 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:46:31.676 09:57:38 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:46:31.676 09:57:38 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:46:31.676 09:57:38 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:46:31.676 09:57:38 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:46:31.676 09:57:38 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67552 00:46:31.676 09:57:38 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:46:31.676 09:57:38 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:46:31.676 09:57:38 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67552 00:46:31.676 09:57:38 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67552 ']' 00:46:31.676 09:57:38 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:31.676 09:57:38 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:31.676 09:57:38 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:31.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:31.676 09:57:38 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:31.676 09:57:38 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:31.676 [2024-12-09 09:57:38.658398] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:46:31.676 [2024-12-09 09:57:38.658596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67552 ] 00:46:31.934 [2024-12-09 09:57:38.850145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:32.192 [2024-12-09 09:57:39.011077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:32.192 [2024-12-09 09:57:39.011088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:33.128 09:57:39 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:33.128 09:57:39 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:46:33.128 09:57:39 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:46:33.385 Nvme0n1 00:46:33.385 09:57:40 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:46:33.385 09:57:40 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:46:33.643 request: 00:46:33.643 { 00:46:33.643 "bdev_name": "Nvme0n1", 00:46:33.643 "filename": "non_existing_file", 00:46:33.643 "method": "bdev_nvme_apply_firmware", 00:46:33.643 "req_id": 1 00:46:33.643 } 00:46:33.643 Got JSON-RPC error response 00:46:33.643 response: 00:46:33.643 { 00:46:33.643 "code": -32603, 00:46:33.643 "message": "open file failed." 00:46:33.643 } 00:46:33.643 09:57:40 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:46:33.643 09:57:40 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:46:33.643 09:57:40 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:46:33.900 09:57:40 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:46:33.900 09:57:40 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67552 00:46:33.900 09:57:40 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67552 ']' 00:46:33.900 09:57:40 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67552 00:46:33.900 09:57:40 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:46:33.900 09:57:40 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:33.900 09:57:40 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67552 00:46:34.158 09:57:40 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:34.158 09:57:40 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:34.158 killing process with pid 67552 00:46:34.158 09:57:40 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67552' 00:46:34.159 09:57:40 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67552 00:46:34.159 09:57:40 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67552 00:46:36.689 00:46:36.689 real 0m4.893s 00:46:36.689 user 0m9.424s 00:46:36.689 sys 0m0.800s 00:46:36.689 09:57:43 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:36.689 09:57:43 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:46:36.689 ************************************ 00:46:36.689 END TEST nvme_rpc 00:46:36.689 ************************************ 00:46:36.689 09:57:43 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:46:36.689 09:57:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:46:36.689 09:57:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:36.689 09:57:43 -- common/autotest_common.sh@10 -- # set +x 00:46:36.689 ************************************ 00:46:36.689 START TEST nvme_rpc_timeouts 00:46:36.689 ************************************ 00:46:36.689 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:46:36.689 * Looking for test storage... 00:46:36.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:46:36.689 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:36.689 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:36.689 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:46:36.689 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:36.690 09:57:43 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:46:36.690 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:36.690 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:36.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:36.690 --rc genhtml_branch_coverage=1 00:46:36.690 --rc genhtml_function_coverage=1 00:46:36.690 --rc genhtml_legend=1 00:46:36.690 --rc geninfo_all_blocks=1 00:46:36.690 --rc geninfo_unexecuted_blocks=1 00:46:36.690 00:46:36.690 ' 00:46:36.690 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:36.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:36.690 --rc genhtml_branch_coverage=1 00:46:36.690 --rc genhtml_function_coverage=1 00:46:36.690 --rc genhtml_legend=1 00:46:36.690 --rc geninfo_all_blocks=1 00:46:36.690 --rc geninfo_unexecuted_blocks=1 00:46:36.690 00:46:36.690 ' 00:46:36.690 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:36.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:36.690 --rc genhtml_branch_coverage=1 00:46:36.690 --rc genhtml_function_coverage=1 00:46:36.690 --rc genhtml_legend=1 00:46:36.690 --rc geninfo_all_blocks=1 00:46:36.690 --rc geninfo_unexecuted_blocks=1 00:46:36.690 00:46:36.690 ' 00:46:36.690 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:36.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:36.690 --rc genhtml_branch_coverage=1 00:46:36.690 --rc genhtml_function_coverage=1 00:46:36.690 --rc genhtml_legend=1 00:46:36.690 --rc geninfo_all_blocks=1 00:46:36.690 --rc geninfo_unexecuted_blocks=1 00:46:36.690 00:46:36.690 ' 00:46:36.690 09:57:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:36.690 09:57:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67635 00:46:36.690 09:57:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67635 00:46:36.690 09:57:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67667 00:46:36.690 09:57:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:46:36.690 09:57:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:46:36.690 09:57:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67667 00:46:36.690 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67667 ']' 00:46:36.690 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:36.690 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:36.690 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:36.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:36.690 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:36.690 09:57:43 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:46:36.690 [2024-12-09 09:57:43.526906] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:46:36.690 [2024-12-09 09:57:43.527089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67667 ] 00:46:36.690 [2024-12-09 09:57:43.723241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:36.949 [2024-12-09 09:57:43.891648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:36.949 [2024-12-09 09:57:43.891651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:37.883 09:57:44 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:37.883 Checking default timeout settings: 00:46:37.883 09:57:44 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:46:37.883 09:57:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:46:37.883 09:57:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:46:38.449 Making settings changes with rpc: 00:46:38.449 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:46:38.449 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:46:38.449 Check default vs. modified settings: 00:46:38.449 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:46:38.449 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67635 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67635 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:46:39.015 Setting action_on_timeout is changed as expected. 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67635 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67635 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:46:39.015 Setting timeout_us is changed as expected. 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
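Each comparison just traced follows the same grep | awk | sed pipeline: pull the setting's line out of a saved-config snapshot, keep the second column, and strip everything but alphanumerics so trailing JSON punctuation cannot break the match. Condensed into a helper (the function name is mine; the test inlines these steps once per setting):

    get_setting() {   # get_setting <name> <snapshot-file>
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    before=$(get_setting timeout_us /tmp/settings_default_67635)
    after=$(get_setting timeout_us /tmp/settings_modified_67635)
    [[ $before != "$after" ]] && echo "Setting timeout_us is changed as expected."

In this run that yields before=0 and after=12000000, so the expected-change message prints.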
00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67635 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67635 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:46:39.015 Setting timeout_admin_us is changed as expected. 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67635 /tmp/settings_modified_67635 00:46:39.015 09:57:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67667 00:46:39.015 09:57:45 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67667 ']' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67667 00:46:39.015 09:57:45 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:46:39.015 09:57:45 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67667 00:46:39.015 09:57:45 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:39.015 killing process with pid 67667 00:46:39.015 09:57:45 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67667' 00:46:39.015 09:57:45 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67667 00:46:39.015 09:57:45 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67667 00:46:41.631 RPC TIMEOUT SETTING TEST PASSED. 00:46:41.631 09:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
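The suite that just passed is a snapshot-diff pattern: capture the full JSON config over RPC, flip the nvme bdev timeout knobs, capture again, and require each knob to have moved off its default. In outline, using the paths and RPC calls that appear verbatim in the trace above (save_config and bdev_nvme_set_options are standard SPDK RPCs):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_67635     # defaults: action none, timeouts 0
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_67635    # then diff the three settings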
00:46:41.631 00:46:41.631 real 0m4.974s 00:46:41.631 user 0m9.645s 00:46:41.631 sys 0m0.793s 00:46:41.631 09:57:48 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:41.631 09:57:48 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:46:41.631 ************************************ 00:46:41.631 END TEST nvme_rpc_timeouts 00:46:41.631 ************************************ 00:46:41.631 09:57:48 -- spdk/autotest.sh@239 -- # uname -s 00:46:41.631 09:57:48 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:46:41.631 09:57:48 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:46:41.631 09:57:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:41.631 09:57:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:41.631 09:57:48 -- common/autotest_common.sh@10 -- # set +x 00:46:41.631 ************************************ 00:46:41.631 START TEST sw_hotplug 00:46:41.631 ************************************ 00:46:41.631 09:57:48 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:46:41.631 * Looking for test storage... 00:46:41.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:46:41.631 09:57:48 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:41.631 09:57:48 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:46:41.631 09:57:48 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:41.631 09:57:48 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:41.631 09:57:48 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:46:41.631 09:57:48 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:41.631 09:57:48 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:41.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:41.631 --rc genhtml_branch_coverage=1 00:46:41.631 --rc genhtml_function_coverage=1 00:46:41.631 --rc genhtml_legend=1 00:46:41.631 --rc geninfo_all_blocks=1 00:46:41.631 --rc geninfo_unexecuted_blocks=1 00:46:41.631 00:46:41.631 ' 00:46:41.631 09:57:48 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:41.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:41.631 --rc genhtml_branch_coverage=1 00:46:41.631 --rc genhtml_function_coverage=1 00:46:41.631 --rc genhtml_legend=1 00:46:41.631 --rc geninfo_all_blocks=1 00:46:41.631 --rc geninfo_unexecuted_blocks=1 00:46:41.631 00:46:41.631 ' 00:46:41.631 09:57:48 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:41.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:41.631 --rc genhtml_branch_coverage=1 00:46:41.631 --rc genhtml_function_coverage=1 00:46:41.631 --rc genhtml_legend=1 00:46:41.631 --rc geninfo_all_blocks=1 00:46:41.631 --rc geninfo_unexecuted_blocks=1 00:46:41.631 00:46:41.631 ' 00:46:41.631 09:57:48 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:41.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:41.631 --rc genhtml_branch_coverage=1 00:46:41.631 --rc genhtml_function_coverage=1 00:46:41.631 --rc genhtml_legend=1 00:46:41.631 --rc geninfo_all_blocks=1 00:46:41.631 --rc geninfo_unexecuted_blocks=1 00:46:41.631 00:46:41.631 ' 00:46:41.631 09:57:48 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:41.890 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:41.890 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:41.890 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:41.890 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:41.890 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:41.890 09:57:48 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:46:41.890 09:57:48 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:46:41.890 09:57:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
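nvme_in_userspace, expanded in the trace that follows, discovers controllers by PCI class code rather than by device node: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVMe). The traced pipeline reassembled below; the stage ordering is inferred from the xtrace, which does not show pipe order:

    # List PCI devices with numeric IDs and full BDFs, keep NVMe-class
    # entries (the -p02 suffix), match class "0108", print the address column.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -F' ' -v cc='"0108"' '{if (cc ~ $2) print $1}' | tr -d '"'

On this VM it prints the four QEMU controllers, 0000:00:10.0 through 0000:00:13.0; sw_hotplug.sh then keeps only the first two (nvme_count=2).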
00:46:41.890 09:57:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:46:41.890 09:57:48 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:46:41.890 09:57:48 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:46:41.890 09:57:48 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:46:41.890 09:57:48 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:46:41.890 09:57:48 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:46:41.890 09:57:48 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@233 -- # local class 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:46:42.149 09:57:48 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:46:42.150 09:57:48 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:46:42.150 09:57:48 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:46:42.150 09:57:48 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:46:42.150 09:57:48 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:46:42.150 09:57:48 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:42.407 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:42.665 Waiting for block devices as requested 00:46:42.665 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:46:42.665 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:46:42.924 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:46:42.924 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:46:48.187 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:46:48.187 09:57:54 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:46:48.187 09:57:54 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:48.444 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:46:48.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:48.444 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:46:48.702 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:46:49.286 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:46:49.286 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:46:49.286 09:57:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68537 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:46:49.286 09:57:56 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:46:49.286 09:57:56 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:46:49.286 09:57:56 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:46:49.286 09:57:56 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:46:49.286 09:57:56 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:46:49.286 09:57:56 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:46:49.553 Initializing NVMe Controllers 00:46:49.553 Attaching to 0000:00:10.0 00:46:49.553 Attaching to 0000:00:11.0 00:46:49.553 Attached to 0000:00:10.0 00:46:49.553 Attached to 0000:00:11.0 00:46:49.553 Initialization complete. Starting I/O... 
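remove_attach_helper 3 6 false, traced above, is the heart of this suite: three hotplug events with a 6-second settle after each surprise removal and each re-attach, where false means the raw hotplug example app rather than bdev-attached I/O drives the traffic. A plausible outline under those assumptions; the sysfs remove path is the standard kernel hotplug interface, and the verbatim helper lives in nvme/sw_hotplug.sh:

    remove_attach_helper() {   # <events> <wait-seconds> <use_bdev>
        local hotplug_events=$1 hotplug_wait=$2 dev
        while (( hotplug_events-- )); do
            for dev in "${nvmes[@]}"; do
                echo 1 > "/sys/bus/pci/devices/$dev/remove"   # surprise-remove
            done
            sleep "$hotplug_wait"                             # let I/O fail over
            for dev in "${nvmes[@]}"; do
                rebind_to_userspace_driver "$dev"             # sketched after the run below
            done
            sleep "$hotplug_wait"
        done
    }

The QEMU I/O counters that follow show traffic in flight while the first removal event fires.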
00:46:49.553 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:46:49.553 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:46:49.553 00:46:50.487 QEMU NVMe Ctrl (12340 ): 1112 I/Os completed (+1112) 00:46:50.487 QEMU NVMe Ctrl (12341 ): 1130 I/Os completed (+1130) 00:46:50.487 00:46:51.860 QEMU NVMe Ctrl (12340 ): 2575 I/Os completed (+1463) 00:46:51.860 QEMU NVMe Ctrl (12341 ): 2598 I/Os completed (+1468) 00:46:51.860 00:46:52.427 QEMU NVMe Ctrl (12340 ): 4252 I/Os completed (+1677) 00:46:52.427 QEMU NVMe Ctrl (12341 ): 4281 I/Os completed (+1683) 00:46:52.427 00:46:53.800 QEMU NVMe Ctrl (12340 ): 6023 I/Os completed (+1771) 00:46:53.800 QEMU NVMe Ctrl (12341 ): 6064 I/Os completed (+1783) 00:46:53.800 00:46:54.734 QEMU NVMe Ctrl (12340 ): 7663 I/Os completed (+1640) 00:46:54.734 QEMU NVMe Ctrl (12341 ): 7812 I/Os completed (+1748) 00:46:54.734 00:46:55.302 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:55.302 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:55.302 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:55.302 [2024-12-09 09:58:02.222029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:46:55.302 Controller removed: QEMU NVMe Ctrl (12340 ) 00:46:55.302 [2024-12-09 09:58:02.224237] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 [2024-12-09 09:58:02.224455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 [2024-12-09 09:58:02.224540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 [2024-12-09 09:58:02.224605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:46:55.302 [2024-12-09 09:58:02.227949] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 [2024-12-09 09:58:02.228133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 [2024-12-09 09:58:02.228171] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 [2024-12-09 09:58:02.228196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:46:55.302 EAL: Scan for (pci) bus failed. 00:46:55.302 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:55.302 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:55.302 [2024-12-09 09:58:02.254742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:46:55.302 Controller removed: QEMU NVMe Ctrl (12341 ) 00:46:55.302 [2024-12-09 09:58:02.256593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 [2024-12-09 09:58:02.256664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 [2024-12-09 09:58:02.256703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 [2024-12-09 09:58:02.256730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:46:55.302 [2024-12-09 09:58:02.259420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 [2024-12-09 09:58:02.259473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 [2024-12-09 09:58:02.259503] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 [2024-12-09 09:58:02.259525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:55.302 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:46:55.302 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:46:55.302 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:46:55.302 EAL: Scan for (pci) bus failed. 00:46:55.560 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:46:55.560 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:46:55.560 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:46:55.560 00:46:55.560 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:46:55.560 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:46:55.560 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:46:55.560 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:46:55.560 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:46:55.560 Attaching to 0000:00:10.0 00:46:55.560 Attached to 0000:00:10.0 00:46:55.560 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:46:55.560 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:46:55.560 09:58:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:46:55.560 Attaching to 0000:00:11.0 00:46:55.561 Attached to 0000:00:11.0 00:46:56.496 QEMU NVMe Ctrl (12340 ): 1531 I/Os completed (+1531) 00:46:56.496 QEMU NVMe Ctrl (12341 ): 1564 I/Os completed (+1564) 00:46:56.496 00:46:57.431 QEMU NVMe Ctrl (12340 ): 3163 I/Os completed (+1632) 00:46:57.431 QEMU NVMe Ctrl (12341 ): 3236 I/Os completed (+1672) 00:46:57.431 00:46:58.834 QEMU NVMe Ctrl (12340 ): 4763 I/Os completed (+1600) 00:46:58.834 QEMU NVMe Ctrl (12341 ): 4855 I/Os completed (+1619) 00:46:58.834 00:46:59.771 QEMU NVMe Ctrl (12340 ): 6387 I/Os completed (+1624) 00:46:59.771 QEMU NVMe Ctrl (12341 ): 6526 I/Os completed (+1671) 00:46:59.771 00:47:00.705 QEMU NVMe Ctrl (12340 ): 8119 I/Os completed (+1732) 00:47:00.705 QEMU NVMe Ctrl (12341 ): 8272 I/Os completed (+1746) 00:47:00.705 00:47:01.639 QEMU NVMe Ctrl (12340 ): 9915 I/Os completed (+1796) 00:47:01.639 QEMU NVMe Ctrl (12341 ): 10078 I/Os completed (+1806) 00:47:01.639 00:47:02.574 QEMU NVMe Ctrl (12340 ): 11620 I/Os completed (+1705) 00:47:02.574 QEMU NVMe Ctrl (12341 ): 11828 I/Os completed (+1750) 
00:47:02.574 00:47:03.507 QEMU NVMe Ctrl (12340 ): 13324 I/Os completed (+1704) 00:47:03.507 QEMU NVMe Ctrl (12341 ): 13547 I/Os completed (+1719) 00:47:03.507 00:47:04.441 QEMU NVMe Ctrl (12340 ): 15024 I/Os completed (+1700) 00:47:04.441 QEMU NVMe Ctrl (12341 ): 15302 I/Os completed (+1755) 00:47:04.441 00:47:05.815 QEMU NVMe Ctrl (12340 ): 16750 I/Os completed (+1726) 00:47:05.815 QEMU NVMe Ctrl (12341 ): 17050 I/Os completed (+1748) 00:47:05.815 00:47:06.814 QEMU NVMe Ctrl (12340 ): 18446 I/Os completed (+1696) 00:47:06.814 QEMU NVMe Ctrl (12341 ): 18761 I/Os completed (+1711) 00:47:06.814 00:47:07.750 QEMU NVMe Ctrl (12340 ): 20006 I/Os completed (+1560) 00:47:07.750 QEMU NVMe Ctrl (12341 ): 20358 I/Os completed (+1597) 00:47:07.750 00:47:07.750 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:47:07.750 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:47:07.751 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:47:07.751 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:47:07.751 [2024-12-09 09:58:14.585533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:47:07.751 Controller removed: QEMU NVMe Ctrl (12340 ) 00:47:07.751 [2024-12-09 09:58:14.587515] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 [2024-12-09 09:58:14.587578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 [2024-12-09 09:58:14.587609] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 [2024-12-09 09:58:14.587639] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:47:07.751 [2024-12-09 09:58:14.590769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 [2024-12-09 09:58:14.590835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 [2024-12-09 09:58:14.590863] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 [2024-12-09 09:58:14.590887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:47:07.751 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:47:07.751 [2024-12-09 09:58:14.614135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:47:07.751 Controller removed: QEMU NVMe Ctrl (12341 ) 00:47:07.751 [2024-12-09 09:58:14.615999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 [2024-12-09 09:58:14.616063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 [2024-12-09 09:58:14.616099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 [2024-12-09 09:58:14.616125] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:47:07.751 [2024-12-09 09:58:14.618929] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 [2024-12-09 09:58:14.618988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 [2024-12-09 09:58:14.619017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 [2024-12-09 09:58:14.619045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:07.751 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:47:07.751 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:47:07.751 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:07.751 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:07.751 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:47:08.010 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:47:08.010 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:08.010 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:08.010 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:08.010 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:47:08.010 Attaching to 0000:00:10.0 00:47:08.010 Attached to 0000:00:10.0 00:47:08.010 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:47:08.010 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:08.010 09:58:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:47:08.010 Attaching to 0000:00:11.0 00:47:08.010 Attached to 0000:00:11.0 00:47:08.574 QEMU NVMe Ctrl (12340 ): 1120 I/Os completed (+1120) 00:47:08.574 QEMU NVMe Ctrl (12341 ): 968 I/Os completed (+968) 00:47:08.574 00:47:09.509 QEMU NVMe Ctrl (12340 ): 2848 I/Os completed (+1728) 00:47:09.509 QEMU NVMe Ctrl (12341 ): 2734 I/Os completed (+1766) 00:47:09.509 00:47:10.451 QEMU NVMe Ctrl (12340 ): 4532 I/Os completed (+1684) 00:47:10.451 QEMU NVMe Ctrl (12341 ): 4510 I/Os completed (+1776) 00:47:10.451 00:47:11.841 QEMU NVMe Ctrl (12340 ): 6260 I/Os completed (+1728) 00:47:11.841 QEMU NVMe Ctrl (12341 ): 6273 I/Os completed (+1763) 00:47:11.841 00:47:12.776 QEMU NVMe Ctrl (12340 ): 7960 I/Os completed (+1700) 00:47:12.776 QEMU NVMe Ctrl (12341 ): 7994 I/Os completed (+1721) 00:47:12.776 00:47:13.710 QEMU NVMe Ctrl (12340 ): 9450 I/Os completed (+1490) 00:47:13.710 QEMU NVMe Ctrl (12341 ): 9588 I/Os completed (+1594) 00:47:13.710 00:47:14.646 QEMU NVMe Ctrl (12340 ): 10990 I/Os completed (+1540) 00:47:14.646 QEMU NVMe Ctrl (12341 ): 11206 I/Os completed (+1618) 00:47:14.646 00:47:15.580 QEMU NVMe Ctrl (12340 ): 12642 I/Os completed (+1652) 00:47:15.580 QEMU NVMe Ctrl (12341 ): 12907 I/Os completed (+1701) 00:47:15.580 00:47:16.515 QEMU 
NVMe Ctrl (12340 ): 14132 I/Os completed (+1490) 00:47:16.515 QEMU NVMe Ctrl (12341 ): 14553 I/Os completed (+1646) 00:47:16.515 00:47:17.450 QEMU NVMe Ctrl (12340 ): 15760 I/Os completed (+1628) 00:47:17.450 QEMU NVMe Ctrl (12341 ): 16232 I/Os completed (+1679) 00:47:17.450 00:47:18.823 QEMU NVMe Ctrl (12340 ): 17372 I/Os completed (+1612) 00:47:18.823 QEMU NVMe Ctrl (12341 ): 17939 I/Os completed (+1707) 00:47:18.823 00:47:19.756 QEMU NVMe Ctrl (12340 ): 18956 I/Os completed (+1584) 00:47:19.756 QEMU NVMe Ctrl (12341 ): 19566 I/Os completed (+1627) 00:47:19.756 00:47:20.014 09:58:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:47:20.014 09:58:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:47:20.014 09:58:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:47:20.014 09:58:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:47:20.014 [2024-12-09 09:58:26.911802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:47:20.014 Controller removed: QEMU NVMe Ctrl (12340 ) 00:47:20.014 [2024-12-09 09:58:26.913780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 [2024-12-09 09:58:26.913842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 [2024-12-09 09:58:26.913872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 [2024-12-09 09:58:26.913901] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:47:20.014 [2024-12-09 09:58:26.917064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 [2024-12-09 09:58:26.917134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 [2024-12-09 09:58:26.917162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 [2024-12-09 09:58:26.917186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 09:58:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:47:20.014 09:58:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:47:20.014 [2024-12-09 09:58:26.942782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:47:20.014 Controller removed: QEMU NVMe Ctrl (12341 ) 00:47:20.014 [2024-12-09 09:58:26.944693] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 [2024-12-09 09:58:26.944755] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 [2024-12-09 09:58:26.944788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 [2024-12-09 09:58:26.944813] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:47:20.014 [2024-12-09 09:58:26.947474] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 [2024-12-09 09:58:26.947524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 [2024-12-09 09:58:26.947556] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 [2024-12-09 09:58:26.947577] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:20.014 09:58:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:47:20.014 09:58:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:47:20.014 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:47:20.014 EAL: Scan for (pci) bus failed. 00:47:20.273 09:58:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:20.273 09:58:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:20.273 09:58:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:47:20.273 09:58:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:47:20.273 09:58:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:20.273 09:58:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:20.273 09:58:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:20.273 09:58:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:47:20.273 Attaching to 0000:00:10.0 00:47:20.273 Attached to 0000:00:10.0 00:47:20.273 09:58:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:47:20.273 09:58:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:20.273 09:58:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:47:20.273 Attaching to 0000:00:11.0 00:47:20.273 Attached to 0000:00:11.0 00:47:20.273 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:47:20.273 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:47:20.273 [2024-12-09 09:58:27.270404] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:47:32.547 09:58:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:47:32.547 09:58:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:47:32.547 09:58:39 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.05 00:47:32.547 09:58:39 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.05 00:47:32.547 09:58:39 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:47:32.547 09:58:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.05 00:47:32.547 09:58:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.05 2 00:47:32.547 remove_attach_helper took 43.05s to complete (handling 2 nvme drive(s)) 09:58:39 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:47:39.109 09:58:45 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68537 00:47:39.109 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68537) - No such process 00:47:39.109 09:58:45 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68537 00:47:39.109 09:58:45 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:47:39.109 09:58:45 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:47:39.109 09:58:45 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:47:39.109 09:58:45 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69077 00:47:39.109 09:58:45 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:39.109 09:58:45 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:47:39.109 09:58:45 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69077 00:47:39.109 09:58:45 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69077 ']' 00:47:39.109 09:58:45 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:39.109 09:58:45 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:39.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:39.109 09:58:45 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:39.109 09:58:45 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:39.109 09:58:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:39.109 [2024-12-09 09:58:45.429493] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
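
(The harness backgrounds the hotplug example, registers a cleanup trap, and later probes the PID with `kill -0` before waiting on it; the "line 93: kill: (68537) - No such process" message above is exactly that probe firing after the child has already exited. A minimal sketch of the same supervision idiom; the binary path and flags are copied from the trace, everything else is generic bash.)

    #!/usr/bin/env bash
    # Hedged sketch of the run_hotplug supervision pattern, not the
    # harness's verbatim code; the relative binary path is an assumption.
    set -euo pipefail

    ./build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning &
    pid=$!

    # Ensure the child never outlives the test, even on Ctrl-C or error.
    trap 'kill "$pid" 2>/dev/null || true; exit 1' SIGINT SIGTERM EXIT

    # ... drive remove/attach events against the controllers here ...

    # kill -0 delivers no signal; it only asks "does this PID still exist?".
    if kill -0 "$pid" 2>/dev/null; then
        wait "$pid"    # reap the child and propagate its exit status
    fi
    trap - SIGINT SIGTERM EXIT
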
00:47:39.109 [2024-12-09 09:58:45.429669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69077 ] 00:47:39.109 [2024-12-09 09:58:45.619629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:39.109 [2024-12-09 09:58:45.803019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:39.674 09:58:46 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:39.674 09:58:46 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:47:39.674 09:58:46 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:47:39.674 09:58:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:39.674 09:58:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:39.674 09:58:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:39.674 09:58:46 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:47:39.674 09:58:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:47:39.674 09:58:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:47:39.674 09:58:46 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:47:39.674 09:58:46 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:47:39.674 09:58:46 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:47:39.674 09:58:46 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:47:39.674 09:58:46 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:47:39.674 09:58:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:47:39.674 09:58:46 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:47:39.674 09:58:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:47:39.674 09:58:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:47:39.674 09:58:46 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:47:46.406 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:47:46.406 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:47:46.406 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:47:46.406 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:47:46.406 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:47:46.406 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:47:46.406 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:46.406 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:46.406 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:46.406 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:46.406 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:46.406 09:58:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:46.406 09:58:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:46.406 09:58:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:46.406 [2024-12-09 09:58:52.777584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:47:46.406 [2024-12-09 09:58:52.780536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:46.406 [2024-12-09 09:58:52.780594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.406 [2024-12-09 09:58:52.780621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.406 [2024-12-09 09:58:52.780655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:46.406 [2024-12-09 09:58:52.780673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.406 [2024-12-09 09:58:52.780690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.406 [2024-12-09 09:58:52.780707] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:46.406 [2024-12-09 09:58:52.780724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.406 [2024-12-09 09:58:52.780738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.406 [2024-12-09 09:58:52.780761] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:46.407 [2024-12-09 09:58:52.780776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.407 [2024-12-09 09:58:52.780794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.407 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:47:46.407 09:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:46.407 [2024-12-09 09:58:53.177592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:47:46.407 [2024-12-09 09:58:53.180715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:46.407 [2024-12-09 09:58:53.180768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.407 [2024-12-09 09:58:53.180794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.407 [2024-12-09 09:58:53.180823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:46.407 [2024-12-09 09:58:53.180843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.407 [2024-12-09 09:58:53.180859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.407 [2024-12-09 09:58:53.180878] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:46.407 [2024-12-09 09:58:53.180893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.407 [2024-12-09 09:58:53.180910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.407 [2024-12-09 09:58:53.180926] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:46.407 [2024-12-09 09:58:53.180943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.407 [2024-12-09 09:58:53.180958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.407 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:47:46.407 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:46.407 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:46.407 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:46.407 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:46.407 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:46.407 09:58:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:46.407 09:58:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:46.407 09:58:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:46.407 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:47:46.407 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:47:46.407 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:46.407 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:46.407 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:47:46.666 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:47:46.666 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:46.666 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:46.666 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:46.666 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:47:46.666 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:47:46.666 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:46.666 09:58:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:58.863 09:59:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:58.863 09:59:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:58.863 09:59:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:47:58.863 [2024-12-09 09:59:05.677760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:47:58.863 [2024-12-09 09:59:05.681120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:58.863 [2024-12-09 09:59:05.681178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:47:58.863 [2024-12-09 09:59:05.681207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:58.863 [2024-12-09 09:59:05.681265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:58.863 [2024-12-09 09:59:05.681287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:47:58.863 [2024-12-09 09:59:05.681305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:58.863 [2024-12-09 09:59:05.681323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:58.863 [2024-12-09 09:59:05.681387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:47:58.863 [2024-12-09 09:59:05.681403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:58.863 [2024-12-09 09:59:05.681423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:58.863 [2024-12-09 09:59:05.681438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:47:58.863 [2024-12-09 09:59:05.681455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:47:58.863 09:59:05 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:58.863 09:59:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:58.863 09:59:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:58.863 09:59:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:58.863 09:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:59.429 [2024-12-09 09:59:06.177764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:47:59.430 [2024-12-09 09:59:06.180757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:59.430 [2024-12-09 09:59:06.180809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:47:59.430 [2024-12-09 09:59:06.180874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:59.430 [2024-12-09 09:59:06.180905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:59.430 [2024-12-09 09:59:06.180925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:47:59.430 [2024-12-09 09:59:06.180941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:59.430 [2024-12-09 09:59:06.180961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:59.430 [2024-12-09 09:59:06.180975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:47:59.430 [2024-12-09 09:59:06.180992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:59.430 [2024-12-09 09:59:06.181008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:59.430 [2024-12-09 09:59:06.181025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:47:59.430 [2024-12-09 09:59:06.181040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:59.430 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:47:59.430 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:59.430 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:59.430 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:59.430 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:59.430 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' 
/dev/fd/63 00:47:59.430 09:59:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:59.430 09:59:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:59.430 09:59:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:59.430 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:47:59.430 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:47:59.430 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:59.430 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:59.430 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:47:59.688 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:47:59.688 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:59.688 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:59.688 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:59.688 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:47:59.688 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:47:59.688 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:59.688 09:59:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:11.889 09:59:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:11.889 09:59:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:11.889 09:59:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:11.889 [2024-12-09 09:59:18.677931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:48:11.889 [2024-12-09 09:59:18.681735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:11.889 [2024-12-09 09:59:18.681796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:48:11.889 [2024-12-09 09:59:18.681819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:11.889 [2024-12-09 09:59:18.681851] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:11.889 [2024-12-09 09:59:18.681868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:48:11.889 [2024-12-09 09:59:18.681888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:11.889 [2024-12-09 09:59:18.681904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:11.889 [2024-12-09 09:59:18.681922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:48:11.889 [2024-12-09 09:59:18.681936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:11.889 [2024-12-09 09:59:18.681954] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:11.889 [2024-12-09 09:59:18.681968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:48:11.889 [2024-12-09 09:59:18.681986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:11.889 09:59:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:11.889 09:59:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:11.889 09:59:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:11.889 09:59:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:12.170 [2024-12-09 09:59:19.077938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:48:12.170 [2024-12-09 09:59:19.080970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:12.170 [2024-12-09 09:59:19.081024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:48:12.170 [2024-12-09 09:59:19.081050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:12.170 [2024-12-09 09:59:19.081080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:12.170 [2024-12-09 09:59:19.081100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:48:12.170 [2024-12-09 09:59:19.081115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:12.170 [2024-12-09 09:59:19.081135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:12.170 [2024-12-09 09:59:19.081149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:48:12.170 [2024-12-09 09:59:19.081169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:12.170 [2024-12-09 09:59:19.081185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:12.170 [2024-12-09 09:59:19.081202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:48:12.170 [2024-12-09 09:59:19.081217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:12.428 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:48:12.428 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:12.428 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:12.428 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:12.428 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:12.428 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:12.428 09:59:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:12.428 09:59:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:12.428 09:59:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:12.428 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:48:12.428 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:48:12.428 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:12.428 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:12.428 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:48:12.686 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:48:12.686 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:12.686 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:12.686 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:12.686 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
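
(In the bdev-backed runs above (use_bdev=true), the helper decides when a removal has really landed by asking the running target which NVMe controllers it still exposes; the sw_hotplug.sh@12/@13 traces show the exact pipeline. A sketch reconstructing that polling loop; the rpc.py path is an assumption, while the jq filter, the sort, and the 0.5 s poll interval are taken from the trace.)

    #!/usr/bin/env bash
    # Hedged reconstruction of the bdev_bdfs polling seen in the trace.
    set -euo pipefail

    rpc=./scripts/rpc.py    # assumed path to SPDK's JSON-RPC client

    # PCI addresses of every NVMe controller the target still has attached.
    bdev_bdfs() {
        "$rpc" bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Poll until the hot-removed controllers disappear from the target.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done
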
00:48:12.686 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:48:12.686 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:12.686 09:59:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.99 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.99 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.99 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.99 2 00:48:24.885 remove_attach_helper took 44.99s to complete (handling 2 nvme drive(s)) 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:48:24.885 09:59:31 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:48:24.885 09:59:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:48:24.885 09:59:31 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:48:31.453 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:48:31.453 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:31.453 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:31.453 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:31.453 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:31.453 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:48:31.453 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:31.453 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:31.453 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:31.454 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:31.454 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:31.454 09:59:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:31.454 09:59:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:31.454 09:59:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:31.454 [2024-12-09 09:59:37.801801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:48:31.454 [2024-12-09 09:59:37.804128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:31.454 [2024-12-09 09:59:37.804202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:48:31.454 [2024-12-09 09:59:37.804232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:31.454 [2024-12-09 09:59:37.804278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:31.454 [2024-12-09 09:59:37.804297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:48:31.454 [2024-12-09 09:59:37.804314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:31.454 [2024-12-09 09:59:37.804345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:31.454 [2024-12-09 09:59:37.804379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:48:31.454 [2024-12-09 09:59:37.804395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:31.454 [2024-12-09 09:59:37.804414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:31.454 [2024-12-09 09:59:37.804428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:48:31.454 [2024-12-09 09:59:37.804449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:31.454 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:48:31.454 09:59:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:31.454 [2024-12-09 09:59:38.201800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:48:31.454 [2024-12-09 09:59:38.204383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:31.454 [2024-12-09 09:59:38.204477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:48:31.454 [2024-12-09 09:59:38.204503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:31.454 [2024-12-09 09:59:38.204531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:31.454 [2024-12-09 09:59:38.204551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:48:31.454 [2024-12-09 09:59:38.204565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:31.454 [2024-12-09 09:59:38.204583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:31.454 [2024-12-09 09:59:38.204597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:48:31.454 [2024-12-09 09:59:38.204614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:31.454 [2024-12-09 09:59:38.204629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:31.454 [2024-12-09 09:59:38.204645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:48:31.454 [2024-12-09 09:59:38.204676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:31.454 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:48:31.454 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:31.454 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:31.454 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:31.454 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:31.454 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:31.454 09:59:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:31.454 09:59:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:31.454 09:59:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:31.454 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:48:31.454 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:48:31.454 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:31.454 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:31.454 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:48:31.713 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:48:31.713 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:31.713 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:31.713 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:31.713 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:48:31.713 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:48:31.713 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:31.713 09:59:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:43.962 09:59:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:43.962 09:59:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:43.962 09:59:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:43.962 09:59:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:43.962 09:59:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:43.962 09:59:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:43.962 [2024-12-09 09:59:50.801952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:48:43.962 [2024-12-09 09:59:50.804094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:43.962 [2024-12-09 09:59:50.804156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:48:43.962 [2024-12-09 09:59:50.804179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:43.962 [2024-12-09 09:59:50.804211] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:43.962 [2024-12-09 09:59:50.804229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:48:43.962 [2024-12-09 09:59:50.804246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:43.962 [2024-12-09 09:59:50.804278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:43.962 [2024-12-09 09:59:50.804297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:48:43.962 [2024-12-09 09:59:50.804311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:43.962 [2024-12-09 09:59:50.804330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:43.962 [2024-12-09 09:59:50.804345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:48:43.962 [2024-12-09 09:59:50.804362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:48:43.962 09:59:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:44.569 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:48:44.569 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:44.569 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:44.569 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:44.569 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:44.569 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:44.569 09:59:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:44.569 09:59:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:44.569 09:59:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:44.569 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:44.569 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:44.569 [2024-12-09 09:59:51.501969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
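The printf 'Still waiting for %s to be gone' / sleep 0.5 sequence (sw_hotplug.sh@50-51) is a poll loop: after detaching the controllers, the test re-lists bdev BDFs every half second until none remain; here the count drops 2, 1, 0 across iterations. The loop shape implied by the trace, reusing bdev_bdfs from the sketch above:

    # Poll until every detached NVMe bdev has disappeared from the target.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done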
00:48:44.569 [2024-12-09 09:59:51.504276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:44.569 [2024-12-09 09:59:51.504325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:48:44.569 [2024-12-09 09:59:51.504351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:44.569 [2024-12-09 09:59:51.504381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:44.569 [2024-12-09 09:59:51.504405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:48:44.569 [2024-12-09 09:59:51.504421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:44.569 [2024-12-09 09:59:51.504440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:44.569 [2024-12-09 09:59:51.504455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:48:44.569 [2024-12-09 09:59:51.504476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:44.569 [2024-12-09 09:59:51.504492] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:44.569 [2024-12-09 09:59:51.504509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:48:44.569 [2024-12-09 09:59:51.504523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:45.136 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:48:45.136 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:45.136 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:45.136 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:45.136 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:45.136 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:45.136 09:59:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:45.136 09:59:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:45.136 09:59:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:45.136 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:48:45.136 09:59:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:48:45.136 09:59:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:45.136 09:59:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:45.136 09:59:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:48:45.136 09:59:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:48:45.136 09:59:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:45.136 09:59:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:45.136 09:59:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:45.136 09:59:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
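The reattach sequence (sw_hotplug.sh@56-62) is a series of bare echo commands: 1, then per device uio_pci_generic, the BDF twice, and an empty string. xtrace shows only the echoed values, never the redirection targets, so the sysfs paths below are assumptions; they show the standard kernel idiom for handing a PCI function to uio_pci_generic, which matches the shape of the traced values:

    # ASSUMED sysfs targets; the log's xtrace prints only the echoed values.
    bdf=0000:00:10.0
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf"          > /sys/bus/pci/drivers_probe                  # bind via the override
    echo ''              > "/sys/bus/pci/devices/$bdf/driver_override" # clear the override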
00:48:45.394 09:59:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:48:45.394 09:59:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:45.394 09:59:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:57.660 10:00:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:57.660 10:00:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:57.660 10:00:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:57.660 [2024-12-09 10:00:04.302163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
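The verification step (sw_hotplug.sh@71) compares the freshly rescanned BDF list against the expected one; the right-hand side appears backslash-escaped in the trace so that none of its characters can act as glob metacharacters inside [[ == ]]. The equivalent test with plain quoting, which disables globbing the same way:

    # Assert the reattached devices match exactly what was detached.
    expected='0000:00:10.0 0000:00:11.0'
    [[ "${bdfs[*]}" == "$expected" ]] || exit 1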
00:48:57.660 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:57.661 [2024-12-09 10:00:04.305219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:57.661 [2024-12-09 10:00:04.305322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:48:57.661 [2024-12-09 10:00:04.305346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:57.661 [2024-12-09 10:00:04.305380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:57.661 [2024-12-09 10:00:04.305397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:48:57.661 [2024-12-09 10:00:04.305415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:57.661 [2024-12-09 10:00:04.305432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:57.661 [2024-12-09 10:00:04.305452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:48:57.661 [2024-12-09 10:00:04.305466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:57.661 [2024-12-09 10:00:04.305485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:57.661 [2024-12-09 10:00:04.305500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:48:57.661 [2024-12-09 10:00:04.305517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:57.661 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:57.661 10:00:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:57.661 10:00:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:57.661 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:57.661 10:00:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:57.661 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:57.661 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:57.661 [2024-12-09 10:00:04.702173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
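Each detach produces this same burst of nvme_pcie_qpair_abort_trackers errors: the admin queue's outstanding ASYNC EVENT REQUEST commands (opcode 0c) are aborted with status (00/07), i.e. status code type 0 (generic) and status code 07h, ABORTED - BY REQUEST. That is expected teardown noise rather than a failure. When triaging a saved copy of a log like this, one quick sanity check is that the abort count is a multiple of the per-controller AER depth (four per controller here); the file name below is a placeholder:

    # Tally the expected AER aborts in a saved console log (placeholder name).
    grep -c 'aborting outstanding command' autorun.log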
00:48:57.919 [2024-12-09 10:00:04.705123] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:57.919 [2024-12-09 10:00:04.705176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:48:57.919 [2024-12-09 10:00:04.705202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:57.919 [2024-12-09 10:00:04.705230] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:57.919 [2024-12-09 10:00:04.705291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:48:57.919 [2024-12-09 10:00:04.705308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:57.919 [2024-12-09 10:00:04.705328] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:57.919 [2024-12-09 10:00:04.705343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:48:57.919 [2024-12-09 10:00:04.705360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:57.919 [2024-12-09 10:00:04.705377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:57.919 [2024-12-09 10:00:04.705397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:48:57.919 [2024-12-09 10:00:04.705411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:57.919 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:48:57.919 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:57.919 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:57.919 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:57.919 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:57.919 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:57.919 10:00:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:57.919 10:00:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:57.919 10:00:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:57.919 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:48:57.919 10:00:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:48:58.186 10:00:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:58.186 10:00:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:58.186 10:00:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:48:58.186 10:00:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:48:58.186 10:00:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:58.186 10:00:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:58.186 10:00:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:58.186 10:00:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
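Assembling the traced pieces, one hotplug iteration runs: detach every controller (the echo 1 per device at sw_hotplug.sh@39-40), poll until their bdevs vanish (@50-51), rebind the devices (@56-62), sleep 12 s so SPDK can re-enumerate them (@66), then verify the bdev list (@70-71). A skeleton of that iteration as the xtrace implies it; detach_dev and attach_dev are hypothetical names wrapping the echo sequences sketched earlier:

    # One remove/attach cycle. detach_dev/attach_dev are HYPOTHETICAL
    # wrappers for the traced echo sequences; bdev_bdfs is sketched above.
    for dev in "${nvmes[@]}"; do detach_dev "$dev"; done
    while (( $(bdev_bdfs | wc -l) > 0 )); do sleep 0.5; done
    for dev in "${nvmes[@]}"; do attach_dev "$dev"; done
    sleep 12
    [[ "$(bdev_bdfs | xargs)" == '0000:00:10.0 0000:00:11.0' ]]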
00:48:58.186 10:00:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:48:58.186 10:00:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:58.186 10:00:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:49:10.389 10:00:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:49:10.389 10:00:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:49:10.389 10:00:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:49:10.389 10:00:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:10.389 10:00:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:10.389 10:00:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.389 10:00:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:49:10.389 10:00:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.55 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.55 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:49:10.389 10:00:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.55 00:49:10.389 10:00:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.55 2 00:49:10.389 remove_attach_helper took 45.55s to complete (handling 2 nvme drive(s)) 10:00:17 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:49:10.389 10:00:17 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69077 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69077 ']' 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69077 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69077 00:49:10.389 killing process with pid 69077 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69077' 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69077 00:49:10.389 10:00:17 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69077 00:49:12.947 10:00:19 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:49:12.947 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:13.515 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:49:13.515 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:49:13.515 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:49:13.515 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:49:13.773 00:49:13.773 real 2m32.346s 00:49:13.773 user 1m52.489s 00:49:13.773 sys 0m19.667s 00:49:13.774 10:00:20 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:49:13.774 ************************************ 00:49:13.774 END TEST sw_hotplug 00:49:13.774 ************************************ 00:49:13.774 10:00:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:13.774 10:00:20 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:49:13.774 10:00:20 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:49:13.774 10:00:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:49:13.774 10:00:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:13.774 10:00:20 -- common/autotest_common.sh@10 -- # set +x 00:49:13.774 ************************************ 00:49:13.774 START TEST nvme_xnvme 00:49:13.774 ************************************ 00:49:13.774 10:00:20 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:49:13.774 * Looking for test storage... 00:49:13.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:49:13.774 10:00:20 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:13.774 10:00:20 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:49:13.774 10:00:20 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:14.035 10:00:20 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:14.035 10:00:20 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:49:14.036 10:00:20 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:49:14.036 10:00:20 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:14.036 10:00:20 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:14.036 10:00:20 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:49:14.036 10:00:20 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:14.036 10:00:20 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:14.036 --rc genhtml_branch_coverage=1 00:49:14.036 --rc genhtml_function_coverage=1 00:49:14.036 --rc genhtml_legend=1 00:49:14.036 --rc geninfo_all_blocks=1 00:49:14.036 --rc geninfo_unexecuted_blocks=1 00:49:14.036 00:49:14.036 ' 00:49:14.036 10:00:20 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:14.036 --rc genhtml_branch_coverage=1 00:49:14.036 --rc genhtml_function_coverage=1 00:49:14.036 --rc genhtml_legend=1 00:49:14.036 --rc geninfo_all_blocks=1 00:49:14.036 --rc geninfo_unexecuted_blocks=1 00:49:14.036 00:49:14.036 ' 00:49:14.036 10:00:20 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:14.036 --rc genhtml_branch_coverage=1 00:49:14.036 --rc genhtml_function_coverage=1 00:49:14.036 --rc genhtml_legend=1 00:49:14.036 --rc geninfo_all_blocks=1 00:49:14.036 --rc geninfo_unexecuted_blocks=1 00:49:14.036 00:49:14.036 ' 00:49:14.036 10:00:20 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:14.036 --rc genhtml_branch_coverage=1 00:49:14.036 --rc genhtml_function_coverage=1 00:49:14.036 --rc genhtml_legend=1 00:49:14.036 --rc geninfo_all_blocks=1 00:49:14.036 --rc geninfo_unexecuted_blocks=1 00:49:14.036 00:49:14.036 ' 00:49:14.036 10:00:20 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:49:14.036 10:00:20 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:49:14.036 10:00:20 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:49:14.036 10:00:20 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:49:14.036 10:00:20 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:49:14.036 10:00:20 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:49:14.036 10:00:20 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:49:14.036 10:00:20 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:49:14.036 10:00:20 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:49:14.036 10:00:20 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
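The CONFIG_* wall here is test/common/build_config.sh being sourced: one exported shell variable per build-time option, mirroring the #define dump from include/spdk/config.h that appears further down in the trace, so tests can gate themselves on how this SPDK tree was configured. A sketch of how a test might consume one of these flags (CONFIG_XNVME=y is traced just below; the skip message is illustrative):

    # build_config.sh defines plain shell variables such as CONFIG_XNVME=y.
    source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
    if [[ $CONFIG_XNVME != y ]]; then
        echo 'xnvme support not built in, skipping' >&2
        exit 0
    fi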
00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:49:14.036 10:00:20 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:49:14.036 10:00:20 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:49:14.037 10:00:20 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:49:14.037 10:00:20 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:49:14.037 10:00:20 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:49:14.037 10:00:20 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:49:14.037 10:00:20 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:49:14.037 10:00:20 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:49:14.037 10:00:20 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:49:14.037 10:00:20 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:49:14.037 #define SPDK_CONFIG_H 00:49:14.037 #define SPDK_CONFIG_AIO_FSDEV 1 00:49:14.037 #define SPDK_CONFIG_APPS 1 00:49:14.037 #define SPDK_CONFIG_ARCH native 00:49:14.037 #define SPDK_CONFIG_ASAN 1 00:49:14.037 #undef SPDK_CONFIG_AVAHI 00:49:14.037 #undef SPDK_CONFIG_CET 00:49:14.037 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:49:14.037 #define SPDK_CONFIG_COVERAGE 1 00:49:14.037 #define SPDK_CONFIG_CROSS_PREFIX 00:49:14.037 #undef SPDK_CONFIG_CRYPTO 00:49:14.037 #undef SPDK_CONFIG_CRYPTO_MLX5 00:49:14.037 #undef SPDK_CONFIG_CUSTOMOCF 00:49:14.037 #undef SPDK_CONFIG_DAOS 00:49:14.037 #define SPDK_CONFIG_DAOS_DIR 00:49:14.037 #define SPDK_CONFIG_DEBUG 1 00:49:14.037 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:49:14.037 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:49:14.037 #define SPDK_CONFIG_DPDK_INC_DIR 00:49:14.037 #define SPDK_CONFIG_DPDK_LIB_DIR 00:49:14.037 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:49:14.037 #undef SPDK_CONFIG_DPDK_UADK 00:49:14.037 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:49:14.037 #define SPDK_CONFIG_EXAMPLES 1 00:49:14.037 #undef SPDK_CONFIG_FC 00:49:14.037 #define SPDK_CONFIG_FC_PATH 00:49:14.037 #define SPDK_CONFIG_FIO_PLUGIN 1 00:49:14.037 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:49:14.037 #define SPDK_CONFIG_FSDEV 1 00:49:14.037 #undef SPDK_CONFIG_FUSE 00:49:14.037 #undef SPDK_CONFIG_FUZZER 00:49:14.037 #define SPDK_CONFIG_FUZZER_LIB 00:49:14.037 #undef SPDK_CONFIG_GOLANG 00:49:14.037 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:49:14.037 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:49:14.037 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:49:14.037 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:49:14.037 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:49:14.037 #undef SPDK_CONFIG_HAVE_LIBBSD 00:49:14.037 #undef SPDK_CONFIG_HAVE_LZ4 00:49:14.037 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:49:14.037 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:49:14.037 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:49:14.037 #define SPDK_CONFIG_IDXD 1 00:49:14.037 #define SPDK_CONFIG_IDXD_KERNEL 1 00:49:14.037 #undef SPDK_CONFIG_IPSEC_MB 00:49:14.037 #define SPDK_CONFIG_IPSEC_MB_DIR 00:49:14.037 #define SPDK_CONFIG_ISAL 1 00:49:14.037 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:49:14.037 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:49:14.037 #define SPDK_CONFIG_LIBDIR 00:49:14.037 #undef SPDK_CONFIG_LTO 00:49:14.037 #define SPDK_CONFIG_MAX_LCORES 128 00:49:14.037 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:49:14.037 #define SPDK_CONFIG_NVME_CUSE 1 00:49:14.037 #undef SPDK_CONFIG_OCF 00:49:14.037 #define SPDK_CONFIG_OCF_PATH 00:49:14.037 #define SPDK_CONFIG_OPENSSL_PATH 00:49:14.037 #undef SPDK_CONFIG_PGO_CAPTURE 00:49:14.037 #define SPDK_CONFIG_PGO_DIR 00:49:14.037 #undef SPDK_CONFIG_PGO_USE 00:49:14.037 #define SPDK_CONFIG_PREFIX /usr/local 00:49:14.037 #undef SPDK_CONFIG_RAID5F 00:49:14.037 #undef SPDK_CONFIG_RBD 00:49:14.037 #define SPDK_CONFIG_RDMA 1 00:49:14.037 #define SPDK_CONFIG_RDMA_PROV verbs 00:49:14.037 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:49:14.037 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:49:14.037 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:49:14.037 #define SPDK_CONFIG_SHARED 1 00:49:14.037 #undef SPDK_CONFIG_SMA 00:49:14.037 #define SPDK_CONFIG_TESTS 1 00:49:14.037 #undef SPDK_CONFIG_TSAN 00:49:14.037 #define SPDK_CONFIG_UBLK 1 00:49:14.037 #define SPDK_CONFIG_UBSAN 1 00:49:14.037 #undef SPDK_CONFIG_UNIT_TESTS 00:49:14.037 #undef SPDK_CONFIG_URING 00:49:14.037 #define SPDK_CONFIG_URING_PATH 00:49:14.037 #undef SPDK_CONFIG_URING_ZNS 00:49:14.037 #undef SPDK_CONFIG_USDT 00:49:14.037 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:49:14.037 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:49:14.037 #undef SPDK_CONFIG_VFIO_USER 00:49:14.037 #define SPDK_CONFIG_VFIO_USER_DIR 00:49:14.037 #define SPDK_CONFIG_VHOST 1 00:49:14.037 #define SPDK_CONFIG_VIRTIO 1 00:49:14.037 #undef SPDK_CONFIG_VTUNE 00:49:14.037 #define SPDK_CONFIG_VTUNE_DIR 00:49:14.037 #define SPDK_CONFIG_WERROR 1 00:49:14.037 #define SPDK_CONFIG_WPDK_DIR 00:49:14.037 #define SPDK_CONFIG_XNVME 1 00:49:14.037 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:49:14.037 10:00:20 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:49:14.037 10:00:20 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:14.037 10:00:20 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:49:14.037 10:00:20 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:14.037 10:00:20 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:14.037 10:00:20 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:14.037 10:00:20 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:14.037 10:00:20 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:14.037 10:00:20 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:14.037 10:00:20 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:49:14.037 10:00:20 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:14.037 10:00:20 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@68 -- # uname -s 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:49:14.037 
10:00:20 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:49:14.037 10:00:20 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:49:14.037 10:00:20 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:49:14.037 10:00:20 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:49:14.037 10:00:20 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:49:14.038 10:00:20 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:49:14.038 10:00:20 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:49:14.039 10:00:20 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
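The long run of ': 0' (or ': 1') entries each followed by an 'export SPDK_TEST_...' pair above (autotest_common.sh@58-178) is the xtrace of a defaulting idiom: xtrace prints the expanded result of the ':' null command, not its source text. The likely source shape behind one traced pair, an assumption about the script text though the trace is consistent with it:

    # Traced as '-- # : 1' then '-- # export SPDK_TEST_NVME':
    : "${SPDK_TEST_NVME:=1}"   # keep the caller's value, else apply the default
    export SPDK_TEST_NVME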
00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70427 ]] 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70427 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.JQfucM 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.JQfucM/tests/xnvme /tmp/spdk.JQfucM 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:49:14.039 10:00:20 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13962473472 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5605871616 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13962473472 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5605871616 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:49:14.039 10:00:20 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:49:14.039 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=92669534208 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=7033245696 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:49:14.040 * Looking for test storage... 
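The set_test_storage trace around this point fills parallel associative arrays from df output, then picks the first candidate directory whose filesystem has enough free space. A sketch of that logic, assuming requested_size and storage_candidates are already set as traced (the read column order matches the trace; the exact df flags may differ from plain df -T):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source   # e.g. /dev/vda5
        fss["$mount"]=$fs          # e.g. btrfs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)

    # choose the first candidate whose filesystem can hold requested_size
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        (( target_space >= requested_size )) && break
    done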
00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13962473472 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:49:14.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:49:14.040 10:00:20 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:14.040 10:00:21 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:14.040 10:00:21 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:49:14.300 10:00:21 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:49:14.300 10:00:21 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:14.300 10:00:21 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:14.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:14.300 --rc genhtml_branch_coverage=1 00:49:14.300 --rc genhtml_function_coverage=1 00:49:14.300 --rc genhtml_legend=1 00:49:14.300 --rc geninfo_all_blocks=1 00:49:14.300 --rc geninfo_unexecuted_blocks=1 00:49:14.300 00:49:14.300 ' 00:49:14.300 10:00:21 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:14.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:14.300 --rc genhtml_branch_coverage=1 00:49:14.300 --rc genhtml_function_coverage=1 00:49:14.300 --rc genhtml_legend=1 00:49:14.300 --rc geninfo_all_blocks=1 
00:49:14.300 --rc geninfo_unexecuted_blocks=1 00:49:14.300 00:49:14.300 ' 00:49:14.300 10:00:21 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:14.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:14.300 --rc genhtml_branch_coverage=1 00:49:14.300 --rc genhtml_function_coverage=1 00:49:14.300 --rc genhtml_legend=1 00:49:14.300 --rc geninfo_all_blocks=1 00:49:14.300 --rc geninfo_unexecuted_blocks=1 00:49:14.300 00:49:14.300 ' 00:49:14.300 10:00:21 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:14.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:14.300 --rc genhtml_branch_coverage=1 00:49:14.300 --rc genhtml_function_coverage=1 00:49:14.300 --rc genhtml_legend=1 00:49:14.300 --rc geninfo_all_blocks=1 00:49:14.300 --rc geninfo_unexecuted_blocks=1 00:49:14.300 00:49:14.300 ' 00:49:14.300 10:00:21 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:14.300 10:00:21 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:14.300 10:00:21 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:14.300 10:00:21 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:14.300 10:00:21 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:14.300 10:00:21 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:49:14.300 10:00:21 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:14.300 10:00:21 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:49:14.300 10:00:21 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:49:14.559 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:14.818 Waiting for block devices as requested 00:49:14.818 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:49:14.818 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:49:14.818 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:49:15.077 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:49:20.346 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:49:20.346 10:00:27 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:49:20.604 10:00:27 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:49:20.604 10:00:27 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:49:20.604 10:00:27 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:49:20.604 10:00:27 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:49:20.604 10:00:27 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:49:20.604 10:00:27 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:49:20.604 10:00:27 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:49:20.862 No valid GPT data, bailing 00:49:20.862 10:00:27 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:49:20.862 10:00:27 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:49:20.862 10:00:27 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:49:20.862 10:00:27 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:49:20.862 10:00:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:49:20.862 10:00:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:20.862 10:00:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:49:20.862 ************************************ 00:49:20.862 START TEST xnvme_rpc 00:49:20.862 ************************************ 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:49:20.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70822 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70822 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70822 ']' 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:20.863 10:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:20.863 [2024-12-09 10:00:27.873849] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
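The xnvme_rpc test starting here drives spdk_tgt over its UNIX-socket JSON-RPC interface. Judging by the bdev config dumps printed later in this log, the rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' call traced below is equivalent to sending this request body:

    {
      "method": "bdev_xnvme_create",
      "params": {
        "name": "xnvme_bdev",
        "filename": "/dev/nvme0n1",
        "io_mechanism": "libaio",
        "conserve_cpu": false
      }
    }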
00:49:20.863 [2024-12-09 10:00:27.874312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70822 ] 00:49:21.122 [2024-12-09 10:00:28.069242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:21.381 [2024-12-09 10:00:28.235095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:22.318 xnvme_bdev 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:22.318 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70822 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70822 ']' 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70822 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70822 00:49:22.576 killing process with pid 70822 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70822' 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70822 00:49:22.576 10:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70822 00:49:25.123 00:49:25.123 real 0m3.990s 00:49:25.123 user 0m4.167s 00:49:25.123 sys 0m0.602s 00:49:25.123 ************************************ 00:49:25.123 END TEST xnvme_rpc 00:49:25.123 ************************************ 00:49:25.123 10:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:25.123 10:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:25.123 10:00:31 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:49:25.123 10:00:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:49:25.123 10:00:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:25.123 10:00:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:49:25.123 ************************************ 00:49:25.123 START TEST xnvme_bdevperf 00:49:25.123 ************************************ 00:49:25.123 10:00:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:49:25.123 10:00:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:49:25.123 10:00:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:49:25.123 10:00:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:49:25.123 10:00:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:49:25.123 10:00:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
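The framework_get_config/jq pairs traced above verify each parameter of the bdev just created. A plausible reconstruction of the rpc_xnvme helper those traces come from (the helper name appears in the trace; the body is inferred from the traced pipeline):

    rpc_xnvme() {
        local attr=$1
        rpc_cmd framework_get_config bdev |
            jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$attr"
    }
    [[ "$(rpc_xnvme name)" == xnvme_bdev ]]
    [[ "$(rpc_xnvme filename)" == /dev/nvme0n1 ]]
    [[ "$(rpc_xnvme io_mechanism)" == libaio ]]
    [[ "$(rpc_xnvme conserve_cpu)" == false ]]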
00:49:25.123 10:00:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:49:25.123 10:00:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:25.123 { 00:49:25.123 "subsystems": [ 00:49:25.123 { 00:49:25.123 "subsystem": "bdev", 00:49:25.123 "config": [ 00:49:25.123 { 00:49:25.123 "params": { 00:49:25.123 "io_mechanism": "libaio", 00:49:25.123 "conserve_cpu": false, 00:49:25.123 "filename": "/dev/nvme0n1", 00:49:25.123 "name": "xnvme_bdev" 00:49:25.123 }, 00:49:25.123 "method": "bdev_xnvme_create" 00:49:25.123 }, 00:49:25.123 { 00:49:25.123 "method": "bdev_wait_for_examine" 00:49:25.123 } 00:49:25.123 ] 00:49:25.123 } 00:49:25.123 ] 00:49:25.123 } 00:49:25.123 [2024-12-09 10:00:31.881684] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:49:25.123 [2024-12-09 10:00:31.882058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70897 ] 00:49:25.123 [2024-12-09 10:00:32.067501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:25.382 [2024-12-09 10:00:32.198066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:25.641 Running I/O for 5 seconds... 00:49:27.953 31515.00 IOPS, 123.11 MiB/s [2024-12-09T10:00:35.933Z] 29992.00 IOPS, 117.16 MiB/s [2024-12-09T10:00:36.868Z] 30611.33 IOPS, 119.58 MiB/s [2024-12-09T10:00:37.820Z] 31179.00 IOPS, 121.79 MiB/s 00:49:30.776 Latency(us) 00:49:30.776 [2024-12-09T10:00:37.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:30.776 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:49:30.776 xnvme_bdev : 5.00 30872.08 120.59 0.00 0.00 2067.59 288.58 5242.88 00:49:30.776 [2024-12-09T10:00:37.820Z] =================================================================================================================== 00:49:30.776 [2024-12-09T10:00:37.820Z] Total : 30872.08 120.59 0.00 0.00 2067.59 288.58 5242.88 00:49:31.711 10:00:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:49:31.711 10:00:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:49:31.711 10:00:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:49:31.711 10:00:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:49:31.711 10:00:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:31.711 { 00:49:31.711 "subsystems": [ 00:49:31.711 { 00:49:31.711 "subsystem": "bdev", 00:49:31.711 "config": [ 00:49:31.711 { 00:49:31.711 "params": { 00:49:31.711 "io_mechanism": "libaio", 00:49:31.711 "conserve_cpu": false, 00:49:31.711 "filename": "/dev/nvme0n1", 00:49:31.711 "name": "xnvme_bdev" 00:49:31.711 }, 00:49:31.711 "method": "bdev_xnvme_create" 00:49:31.711 }, 00:49:31.711 { 00:49:31.711 "method": "bdev_wait_for_examine" 00:49:31.711 } 00:49:31.711 ] 00:49:31.711 } 00:49:31.711 ] 00:49:31.711 } 00:49:31.968 [2024-12-09 10:00:38.790976] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
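Each xnvme_bdevperf pass here follows the same shape: gen_conf emits the bdev JSON shown in the log and bdevperf reads it from /dev/fd/62. Schematically, with the flags as traced (wiring the config through process substitution is an assumption about how fd 62 is supplied):

    # -q 64: queue depth, -w: workload, -t 5: seconds, -T: bdev under test, -o 4096: I/O size in bytes
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_conf) \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096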
00:49:31.968 [2024-12-09 10:00:38.791314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70983 ] 00:49:31.968 [2024-12-09 10:00:38.969158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:32.226 [2024-12-09 10:00:39.101640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:32.484 Running I/O for 5 seconds... 00:49:34.793 27517.00 IOPS, 107.49 MiB/s [2024-12-09T10:00:42.771Z] 27023.50 IOPS, 105.56 MiB/s [2024-12-09T10:00:43.706Z] 27695.00 IOPS, 108.18 MiB/s [2024-12-09T10:00:44.643Z] 27637.25 IOPS, 107.96 MiB/s 00:49:37.599 Latency(us) 00:49:37.599 [2024-12-09T10:00:44.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:37.599 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:49:37.599 xnvme_bdev : 5.00 27671.14 108.09 0.00 0.00 2306.55 238.31 5421.61 00:49:37.599 [2024-12-09T10:00:44.643Z] =================================================================================================================== 00:49:37.599 [2024-12-09T10:00:44.643Z] Total : 27671.14 108.09 0.00 0.00 2306.55 238.31 5421.61 00:49:38.978 00:49:38.978 real 0m13.807s 00:49:38.978 user 0m5.441s 00:49:38.978 sys 0m5.867s 00:49:38.978 ************************************ 00:49:38.978 END TEST xnvme_bdevperf 00:49:38.978 ************************************ 00:49:38.978 10:00:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:38.978 10:00:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:38.978 10:00:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:49:38.978 10:00:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:49:38.978 10:00:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:38.978 10:00:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:49:38.978 ************************************ 00:49:38.978 START TEST xnvme_fio_plugin 00:49:38.978 ************************************ 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:49:38.978 10:00:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:49:38.978 { 00:49:38.978 "subsystems": [ 00:49:38.978 { 00:49:38.978 "subsystem": "bdev", 00:49:38.978 "config": [ 00:49:38.978 { 00:49:38.978 "params": { 00:49:38.978 "io_mechanism": "libaio", 00:49:38.978 "conserve_cpu": false, 00:49:38.978 "filename": "/dev/nvme0n1", 00:49:38.978 "name": "xnvme_bdev" 00:49:38.979 }, 00:49:38.979 "method": "bdev_xnvme_create" 00:49:38.979 }, 00:49:38.979 { 00:49:38.979 "method": "bdev_wait_for_examine" 00:49:38.979 } 00:49:38.979 ] 00:49:38.979 } 00:49:38.979 ] 00:49:38.979 } 00:49:38.979 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:49:38.979 fio-3.35 00:49:38.979 Starting 1 thread 00:49:45.542 00:49:45.542 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71108: Mon Dec 9 10:00:51 2024 00:49:45.542 read: IOPS=25.4k, BW=99.2MiB/s (104MB/s)(496MiB/5001msec) 00:49:45.542 slat (usec): min=5, max=3019, avg=35.14, stdev=28.39 00:49:45.542 clat (usec): min=115, max=7179, avg=1382.89, stdev=741.12 00:49:45.542 lat (usec): min=162, max=7271, avg=1418.03, stdev=743.02 00:49:45.542 clat percentiles (usec): 00:49:45.542 | 1.00th=[ 239], 5.00th=[ 351], 10.00th=[ 457], 20.00th=[ 676], 00:49:45.542 | 30.00th=[ 889], 40.00th=[ 1090], 50.00th=[ 1303], 60.00th=[ 1532], 00:49:45.542 | 70.00th=[ 1778], 80.00th=[ 2040], 90.00th=[ 2376], 95.00th=[ 2606], 00:49:45.542 | 99.00th=[ 3392], 99.50th=[ 3752], 99.90th=[ 4555], 99.95th=[ 4883], 00:49:45.542 | 99.99th=[ 5669] 00:49:45.542 bw ( KiB/s): min=93200, max=115272, per=100.00%, avg=101880.89, 
stdev=7944.45, samples=9 00:49:45.542 iops : min=23300, max=28818, avg=25470.22, stdev=1986.11, samples=9 00:49:45.542 lat (usec) : 250=1.29%, 500=10.60%, 750=11.68%, 1000=11.74% 00:49:45.542 lat (msec) : 2=43.39%, 4=21.02%, 10=0.28% 00:49:45.542 cpu : usr=25.48%, sys=53.26%, ctx=142, majf=0, minf=626 00:49:45.542 IO depths : 1=0.1%, 2=1.6%, 4=5.5%, 8=12.4%, 16=26.0%, 32=52.7%, >=64=1.7% 00:49:45.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:45.542 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:49:45.542 issued rwts: total=126941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:45.542 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:45.542 00:49:45.542 Run status group 0 (all jobs): 00:49:45.542 READ: bw=99.2MiB/s (104MB/s), 99.2MiB/s-99.2MiB/s (104MB/s-104MB/s), io=496MiB (520MB), run=5001-5001msec 00:49:46.109 ----------------------------------------------------- 00:49:46.109 Suppressions used: 00:49:46.109 count bytes template 00:49:46.109 1 11 /usr/src/fio/parse.c 00:49:46.109 1 8 libtcmalloc_minimal.so 00:49:46.109 1 904 libcrypto.so 00:49:46.109 ----------------------------------------------------- 00:49:46.109 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:49:46.367 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:49:46.368 10:00:53 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:49:46.368 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:49:46.368 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:49:46.368 10:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:49:46.368 { 00:49:46.368 "subsystems": [ 00:49:46.368 { 00:49:46.368 "subsystem": "bdev", 00:49:46.368 "config": [ 00:49:46.368 { 00:49:46.368 "params": { 00:49:46.368 "io_mechanism": "libaio", 00:49:46.368 "conserve_cpu": false, 00:49:46.368 "filename": "/dev/nvme0n1", 00:49:46.368 "name": "xnvme_bdev" 00:49:46.368 }, 00:49:46.368 "method": "bdev_xnvme_create" 00:49:46.368 }, 00:49:46.368 { 00:49:46.368 "method": "bdev_wait_for_examine" 00:49:46.368 } 00:49:46.368 ] 00:49:46.368 } 00:49:46.368 ] 00:49:46.368 } 00:49:46.625 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:49:46.625 fio-3.35 00:49:46.625 Starting 1 thread 00:49:53.200 00:49:53.200 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71208: Mon Dec 9 10:00:59 2024 00:49:53.200 write: IOPS=26.2k, BW=102MiB/s (107MB/s)(512MiB/5001msec); 0 zone resets 00:49:53.200 slat (usec): min=5, max=4608, avg=33.82, stdev=32.74 00:49:53.200 clat (usec): min=95, max=8306, avg=1358.40, stdev=731.59 00:49:53.200 lat (usec): min=147, max=8388, avg=1392.22, stdev=733.84 00:49:53.200 clat percentiles (usec): 00:49:53.200 | 1.00th=[ 255], 5.00th=[ 363], 10.00th=[ 469], 20.00th=[ 676], 00:49:53.200 | 30.00th=[ 873], 40.00th=[ 1074], 50.00th=[ 1270], 60.00th=[ 1467], 00:49:53.200 | 70.00th=[ 1713], 80.00th=[ 1991], 90.00th=[ 2343], 95.00th=[ 2606], 00:49:53.200 | 99.00th=[ 3326], 99.50th=[ 3720], 99.90th=[ 4555], 99.95th=[ 4948], 00:49:53.200 | 99.99th=[ 7504] 00:49:53.200 bw ( KiB/s): min=94080, max=139128, per=100.00%, avg=105355.44, stdev=15029.03, samples=9 00:49:53.200 iops : min=23520, max=34782, avg=26338.78, stdev=3757.29, samples=9 00:49:53.200 lat (usec) : 100=0.01%, 250=0.89%, 500=10.52%, 750=12.16%, 1000=12.82% 00:49:53.200 lat (msec) : 2=43.85%, 4=19.45%, 10=0.31% 00:49:53.200 cpu : usr=27.00%, sys=52.22%, ctx=114, majf=0, minf=689 00:49:53.200 IO depths : 1=0.1%, 2=1.4%, 4=5.1%, 8=12.1%, 16=26.1%, 32=53.5%, >=64=1.7% 00:49:53.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:53.200 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:49:53.200 issued rwts: total=0,131054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:53.200 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:53.200 00:49:53.200 Run status group 0 (all jobs): 00:49:53.200 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=512MiB (537MB), run=5001-5001msec 00:49:53.767 ----------------------------------------------------- 00:49:53.768 Suppressions used: 00:49:53.768 count bytes template 00:49:53.768 1 11 /usr/src/fio/parse.c 00:49:53.768 1 8 libtcmalloc_minimal.so 00:49:53.768 1 904 libcrypto.so 00:49:53.768 ----------------------------------------------------- 00:49:53.768 00:49:53.768 00:49:53.768 real 0m15.074s 00:49:53.768 user 0m6.575s 00:49:53.768 sys 0m6.034s 00:49:53.768 10:01:00 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:53.768 10:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:49:53.768 ************************************ 00:49:53.768 END TEST xnvme_fio_plugin 00:49:53.768 ************************************ 00:49:53.768 10:01:00 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:49:53.768 10:01:00 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:49:53.768 10:01:00 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:49:53.768 10:01:00 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:49:53.768 10:01:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:49:53.768 10:01:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:53.768 10:01:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:49:53.768 ************************************ 00:49:53.768 START TEST xnvme_rpc 00:49:53.768 ************************************ 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71300 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71300 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71300 ']' 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:53.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:53.768 10:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:54.026 [2024-12-09 10:01:00.892181] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
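This second xnvme_rpc pass repeats the first with conserve_cpu=true: cc["true"]=-c in the trace above means the create call traced just below gains a -c flag, and the conserve_cpu verification then expects true instead of false:

    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c enables conserve_cpu
    [[ "$(rpc_xnvme conserve_cpu)" == true ]]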
00:49:54.026 [2024-12-09 10:01:00.892480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71300 ] 00:49:54.285 [2024-12-09 10:01:01.081550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:54.285 [2024-12-09 10:01:01.238968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:55.221 xnvme_bdev 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:55.221 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71300 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71300 ']' 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71300 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71300 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:55.479 killing process with pid 71300 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71300' 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71300 00:49:55.479 10:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71300 00:49:58.087 00:49:58.087 real 0m3.972s 00:49:58.087 user 0m4.208s 00:49:58.087 sys 0m0.588s 00:49:58.087 10:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:58.087 10:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:49:58.087 ************************************ 00:49:58.087 END TEST xnvme_rpc 00:49:58.087 ************************************ 00:49:58.087 10:01:04 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:49:58.087 10:01:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:49:58.087 10:01:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:58.087 10:01:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:49:58.087 ************************************ 00:49:58.087 START TEST xnvme_bdevperf 00:49:58.087 ************************************ 00:49:58.087 10:01:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:49:58.087 10:01:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:49:58.087 10:01:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:49:58.087 10:01:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:49:58.087 10:01:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:49:58.087 10:01:04 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:49:58.087 10:01:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:49:58.087 10:01:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:58.087 { 00:49:58.087 "subsystems": [ 00:49:58.087 { 00:49:58.087 "subsystem": "bdev", 00:49:58.087 "config": [ 00:49:58.087 { 00:49:58.087 "params": { 00:49:58.087 "io_mechanism": "libaio", 00:49:58.087 "conserve_cpu": true, 00:49:58.087 "filename": "/dev/nvme0n1", 00:49:58.087 "name": "xnvme_bdev" 00:49:58.087 }, 00:49:58.087 "method": "bdev_xnvme_create" 00:49:58.087 }, 00:49:58.087 { 00:49:58.087 "method": "bdev_wait_for_examine" 00:49:58.087 } 00:49:58.087 ] 00:49:58.087 } 00:49:58.087 ] 00:49:58.087 } 00:49:58.087 [2024-12-09 10:01:04.895976] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:49:58.087 [2024-12-09 10:01:04.896161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71381 ] 00:49:58.087 [2024-12-09 10:01:05.085688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:58.345 [2024-12-09 10:01:05.227079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:58.603 Running I/O for 5 seconds... 00:50:00.914 25450.00 IOPS, 99.41 MiB/s [2024-12-09T10:01:08.945Z] 23745.00 IOPS, 92.75 MiB/s [2024-12-09T10:01:09.881Z] 23853.00 IOPS, 93.18 MiB/s [2024-12-09T10:01:10.838Z] 25003.00 IOPS, 97.67 MiB/s 00:50:03.794 Latency(us) 00:50:03.794 [2024-12-09T10:01:10.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:03.794 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:50:03.794 xnvme_bdev : 5.01 25413.62 99.27 0.00 0.00 2511.95 286.72 8579.26 00:50:03.794 [2024-12-09T10:01:10.838Z] =================================================================================================================== 00:50:03.794 [2024-12-09T10:01:10.838Z] Total : 25413.62 99.27 0.00 0.00 2511.95 286.72 8579.26 00:50:04.728 10:01:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:50:04.728 10:01:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:50:04.728 10:01:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:50:04.728 10:01:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:50:04.728 10:01:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:50:04.987 { 00:50:04.987 "subsystems": [ 00:50:04.987 { 00:50:04.987 "subsystem": "bdev", 00:50:04.987 "config": [ 00:50:04.987 { 00:50:04.987 "params": { 00:50:04.987 "io_mechanism": "libaio", 00:50:04.987 "conserve_cpu": true, 00:50:04.987 "filename": "/dev/nvme0n1", 00:50:04.987 "name": "xnvme_bdev" 00:50:04.987 }, 00:50:04.987 "method": "bdev_xnvme_create" 00:50:04.987 }, 00:50:04.987 { 00:50:04.987 "method": "bdev_wait_for_examine" 00:50:04.987 } 00:50:04.987 ] 00:50:04.987 } 00:50:04.987 ] 00:50:04.987 } 00:50:04.987 [2024-12-09 10:01:11.848399] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
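The JSON blocks above are handed to bdevperf on /dev/fd/62; writing the same config to an ordinary file reproduces the run standalone. A sketch assuming the same repo layout (/tmp/xnvme.json is an arbitrary name, not from the run):

    cat > /tmp/xnvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "libaio",
                "conserve_cpu": true,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # Same flags as the run above: queue depth 64, 4 KiB I/Os, 5 s, target bdev xnvme_bdev
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096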
00:50:04.987 [2024-12-09 10:01:11.848581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71462 ] 00:50:04.987 [2024-12-09 10:01:12.029909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:05.244 [2024-12-09 10:01:12.162163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:05.503 Running I/O for 5 seconds... 00:50:07.813 23585.00 IOPS, 92.13 MiB/s [2024-12-09T10:01:15.792Z] 23235.50 IOPS, 90.76 MiB/s [2024-12-09T10:01:16.731Z] 23277.33 IOPS, 90.93 MiB/s [2024-12-09T10:01:17.670Z] 24704.00 IOPS, 96.50 MiB/s 00:50:10.626 Latency(us) 00:50:10.626 [2024-12-09T10:01:17.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:10.626 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:50:10.626 xnvme_bdev : 5.00 24689.53 96.44 0.00 0.00 2585.02 283.00 6345.08 00:50:10.626 [2024-12-09T10:01:17.670Z] =================================================================================================================== 00:50:10.626 [2024-12-09T10:01:17.670Z] Total : 24689.53 96.44 0.00 0.00 2585.02 283.00 6345.08 00:50:12.003 00:50:12.003 real 0m13.873s 00:50:12.003 user 0m5.534s 00:50:12.003 sys 0m5.870s 00:50:12.003 10:01:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:12.003 10:01:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:50:12.003 ************************************ 00:50:12.003 END TEST xnvme_bdevperf 00:50:12.003 ************************************ 00:50:12.003 10:01:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:50:12.003 10:01:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:50:12.003 10:01:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:50:12.003 10:01:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:50:12.003 ************************************ 00:50:12.003 START TEST xnvme_fio_plugin 00:50:12.003 ************************************ 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:50:12.003 
10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:50:12.003 10:01:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:50:12.003 { 00:50:12.003 "subsystems": [ 00:50:12.003 { 00:50:12.003 "subsystem": "bdev", 00:50:12.003 "config": [ 00:50:12.003 { 00:50:12.003 "params": { 00:50:12.003 "io_mechanism": "libaio", 00:50:12.003 "conserve_cpu": true, 00:50:12.003 "filename": "/dev/nvme0n1", 00:50:12.003 "name": "xnvme_bdev" 00:50:12.003 }, 00:50:12.003 "method": "bdev_xnvme_create" 00:50:12.003 }, 00:50:12.003 { 00:50:12.003 "method": "bdev_wait_for_examine" 00:50:12.003 } 00:50:12.003 ] 00:50:12.003 } 00:50:12.003 ] 00:50:12.003 } 00:50:12.003 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:50:12.003 fio-3.35 00:50:12.003 Starting 1 thread 00:50:18.568 00:50:18.568 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71581: Mon Dec 9 10:01:24 2024 00:50:18.568 read: IOPS=24.8k, BW=96.9MiB/s (102MB/s)(485MiB/5001msec) 00:50:18.568 slat (usec): min=5, max=2292, avg=35.71, stdev=31.95 00:50:18.568 clat (usec): min=74, max=6645, avg=1436.09, stdev=805.31 00:50:18.568 lat (usec): min=158, max=6734, avg=1471.80, stdev=808.52 00:50:18.568 clat percentiles (usec): 00:50:18.568 | 1.00th=[ 245], 5.00th=[ 375], 10.00th=[ 498], 20.00th=[ 709], 00:50:18.568 | 30.00th=[ 906], 40.00th=[ 1106], 50.00th=[ 1303], 60.00th=[ 1532], 00:50:18.568 | 70.00th=[ 1795], 80.00th=[ 2114], 90.00th=[ 2540], 95.00th=[ 2900], 00:50:18.568 | 99.00th=[ 3818], 99.50th=[ 4146], 99.90th=[ 4752], 99.95th=[ 4948], 00:50:18.568 | 99.99th=[ 5473] 00:50:18.568 bw ( KiB/s): min=90216, max=109960, per=100.00%, avg=99676.78, stdev=5246.51, samples=9 
00:50:18.568 iops : min=22554, max=27490, avg=24919.11, stdev=1311.65, samples=9 00:50:18.568 lat (usec) : 100=0.01%, 250=1.12%, 500=9.03%, 750=11.70%, 1000=13.03% 00:50:18.568 lat (msec) : 2=41.85%, 4=22.57%, 10=0.70% 00:50:18.568 cpu : usr=26.44%, sys=52.44%, ctx=79, majf=0, minf=764 00:50:18.568 IO depths : 1=0.2%, 2=1.6%, 4=5.0%, 8=11.7%, 16=25.6%, 32=54.3%, >=64=1.7% 00:50:18.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:18.568 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:50:18.568 issued rwts: total=124116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:50:18.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:50:18.568 00:50:18.568 Run status group 0 (all jobs): 00:50:18.568 READ: bw=96.9MiB/s (102MB/s), 96.9MiB/s-96.9MiB/s (102MB/s-102MB/s), io=485MiB (508MB), run=5001-5001msec 00:50:19.135 ----------------------------------------------------- 00:50:19.135 Suppressions used: 00:50:19.135 count bytes template 00:50:19.135 1 11 /usr/src/fio/parse.c 00:50:19.135 1 8 libtcmalloc_minimal.so 00:50:19.135 1 904 libcrypto.so 00:50:19.135 ----------------------------------------------------- 00:50:19.135 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:50:19.393 10:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:50:19.393 { 00:50:19.393 "subsystems": [ 00:50:19.393 { 00:50:19.393 "subsystem": "bdev", 00:50:19.393 "config": [ 00:50:19.393 { 00:50:19.393 "params": { 00:50:19.393 "io_mechanism": "libaio", 00:50:19.393 "conserve_cpu": true, 00:50:19.393 "filename": "/dev/nvme0n1", 00:50:19.393 "name": "xnvme_bdev" 00:50:19.393 }, 00:50:19.393 "method": "bdev_xnvme_create" 00:50:19.393 }, 00:50:19.393 { 00:50:19.393 "method": "bdev_wait_for_examine" 00:50:19.393 } 00:50:19.393 ] 00:50:19.393 } 00:50:19.393 ] 00:50:19.393 } 00:50:19.652 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:50:19.652 fio-3.35 00:50:19.652 Starting 1 thread 00:50:26.211 00:50:26.211 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71683: Mon Dec 9 10:01:32 2024 00:50:26.211 write: IOPS=23.9k, BW=93.4MiB/s (97.9MB/s)(467MiB/5001msec); 0 zone resets 00:50:26.211 slat (usec): min=5, max=974, avg=37.31, stdev=31.03 00:50:26.211 clat (usec): min=116, max=6149, avg=1476.53, stdev=851.09 00:50:26.211 lat (usec): min=166, max=6195, avg=1513.83, stdev=854.98 00:50:26.211 clat percentiles (usec): 00:50:26.211 | 1.00th=[ 239], 5.00th=[ 367], 10.00th=[ 494], 20.00th=[ 709], 00:50:26.211 | 30.00th=[ 914], 40.00th=[ 1106], 50.00th=[ 1319], 60.00th=[ 1565], 00:50:26.211 | 70.00th=[ 1844], 80.00th=[ 2212], 90.00th=[ 2671], 95.00th=[ 3064], 00:50:26.211 | 99.00th=[ 3851], 99.50th=[ 4178], 99.90th=[ 4752], 99.95th=[ 5014], 00:50:26.211 | 99.99th=[ 5669] 00:50:26.211 bw ( KiB/s): min=84320, max=110576, per=100.00%, avg=95859.56, stdev=8436.54, samples=9 00:50:26.211 iops : min=21080, max=27644, avg=23964.89, stdev=2109.14, samples=9 00:50:26.211 lat (usec) : 250=1.25%, 500=9.07%, 750=11.58%, 1000=12.85% 00:50:26.211 lat (msec) : 2=39.95%, 4=24.57%, 10=0.73% 00:50:26.211 cpu : usr=26.54%, sys=52.36%, ctx=138, majf=0, minf=765 00:50:26.211 IO depths : 1=0.1%, 2=1.7%, 4=5.0%, 8=11.4%, 16=25.6%, 32=54.4%, >=64=1.7% 00:50:26.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:26.211 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:50:26.211 issued rwts: total=0,119514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:50:26.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:50:26.211 00:50:26.211 Run status group 0 (all jobs): 00:50:26.211 WRITE: bw=93.4MiB/s (97.9MB/s), 93.4MiB/s-93.4MiB/s (97.9MB/s-97.9MB/s), io=467MiB (490MB), run=5001-5001msec 00:50:26.778 ----------------------------------------------------- 00:50:26.778 Suppressions used: 00:50:26.778 count bytes template 00:50:26.778 1 11 /usr/src/fio/parse.c 00:50:26.778 1 8 libtcmalloc_minimal.so 00:50:26.778 1 904 libcrypto.so 00:50:26.778 ----------------------------------------------------- 00:50:26.778 00:50:26.778 00:50:26.778 real 0m15.007s 00:50:26.778 user 0m6.549s 00:50:26.778 sys 0m6.008s 00:50:26.778 10:01:33 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:26.778 10:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:50:26.778 ************************************ 00:50:26.778 END TEST xnvme_fio_plugin 00:50:26.778 ************************************ 00:50:26.778 10:01:33 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:50:26.778 10:01:33 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:50:26.778 10:01:33 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:50:26.778 10:01:33 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:50:26.778 10:01:33 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:50:26.778 10:01:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:50:26.778 10:01:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:50:26.778 10:01:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:50:26.778 10:01:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:50:26.778 10:01:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:50:26.778 10:01:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:50:26.778 10:01:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:50:26.778 ************************************ 00:50:26.778 START TEST xnvme_rpc 00:50:26.778 ************************************ 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71765 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71765 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71765 ']' 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:26.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:26.778 10:01:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:50:27.037 [2024-12-09 10:01:33.895279] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
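The two fio passes just completed drive the same xnvme bdev through SPDK's fio plugin rather than bdevperf. Stripped of the harness plumbing, the invocation is the one visible in the xtrace; the only substitution in this sketch is a regular JSON file (e.g. the /tmp/xnvme.json above) in place of /dev/fd/62, and the libasan preload is only needed because this build is ASan-instrumented:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev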
00:50:27.037 [2024-12-09 10:01:33.895528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71765 ] 00:50:27.296 [2024-12-09 10:01:34.087132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:27.296 [2024-12-09 10:01:34.245862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:28.236 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:28.236 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:50:28.236 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:50:28.236 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:28.236 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:50:28.236 xnvme_bdev 00:50:28.236 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:28.237 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71765 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71765 ']' 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71765 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71765 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71765' 00:50:28.497 killing process with pid 71765 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71765 00:50:28.497 10:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71765 00:50:31.030 00:50:31.030 real 0m3.909s 00:50:31.030 user 0m4.054s 00:50:31.030 sys 0m0.586s 00:50:31.030 10:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:31.030 10:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:50:31.030 ************************************ 00:50:31.030 END TEST xnvme_rpc 00:50:31.030 ************************************ 00:50:31.030 10:01:37 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:50:31.030 10:01:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:50:31.030 10:01:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:50:31.030 10:01:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:50:31.030 ************************************ 00:50:31.030 START TEST xnvme_bdevperf 00:50:31.030 ************************************ 00:50:31.030 10:01:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:50:31.030 10:01:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:50:31.030 10:01:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:50:31.030 10:01:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:50:31.030 10:01:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:50:31.030 10:01:37 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:50:31.030 10:01:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:50:31.030 10:01:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:50:31.030 { 00:50:31.030 "subsystems": [ 00:50:31.030 { 00:50:31.030 "subsystem": "bdev", 00:50:31.030 "config": [ 00:50:31.030 { 00:50:31.030 "params": { 00:50:31.030 "io_mechanism": "io_uring", 00:50:31.030 "conserve_cpu": false, 00:50:31.030 "filename": "/dev/nvme0n1", 00:50:31.030 "name": "xnvme_bdev" 00:50:31.030 }, 00:50:31.030 "method": "bdev_xnvme_create" 00:50:31.030 }, 00:50:31.030 { 00:50:31.030 "method": "bdev_wait_for_examine" 00:50:31.030 } 00:50:31.030 ] 00:50:31.030 } 00:50:31.030 ] 00:50:31.030 } 00:50:31.030 [2024-12-09 10:01:37.835047] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:50:31.030 [2024-12-09 10:01:37.835286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71851 ] 00:50:31.030 [2024-12-09 10:01:38.022546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:31.289 [2024-12-09 10:01:38.144963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:31.548 Running I/O for 5 seconds... 00:50:33.860 47551.00 IOPS, 185.75 MiB/s [2024-12-09T10:01:41.840Z] 46578.50 IOPS, 181.95 MiB/s [2024-12-09T10:01:42.776Z] 47586.67 IOPS, 185.89 MiB/s [2024-12-09T10:01:43.712Z] 47523.25 IOPS, 185.64 MiB/s 00:50:36.668 Latency(us) 00:50:36.668 [2024-12-09T10:01:43.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:36.668 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:50:36.668 xnvme_bdev : 5.00 47177.77 184.29 0.00 0.00 1352.19 404.01 9234.62 00:50:36.668 [2024-12-09T10:01:43.712Z] =================================================================================================================== 00:50:36.668 [2024-12-09T10:01:43.712Z] Total : 47177.77 184.29 0.00 0.00 1352.19 404.01 9234.62 00:50:38.108 10:01:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:50:38.108 10:01:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:50:38.108 10:01:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:50:38.108 10:01:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:50:38.108 10:01:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:50:38.108 { 00:50:38.108 "subsystems": [ 00:50:38.108 { 00:50:38.108 "subsystem": "bdev", 00:50:38.108 "config": [ 00:50:38.108 { 00:50:38.108 "params": { 00:50:38.108 "io_mechanism": "io_uring", 00:50:38.108 "conserve_cpu": false, 00:50:38.108 "filename": "/dev/nvme0n1", 00:50:38.108 "name": "xnvme_bdev" 00:50:38.108 }, 00:50:38.108 "method": "bdev_xnvme_create" 00:50:38.108 }, 00:50:38.108 { 00:50:38.108 "method": "bdev_wait_for_examine" 00:50:38.108 } 00:50:38.108 ] 00:50:38.108 } 00:50:38.108 ] 00:50:38.108 } 00:50:38.108 [2024-12-09 10:01:44.776132] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
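The MiB/s column in these latency tables is just IOPS multiplied by the 4096-byte I/O size. A quick check against the randread total above:

    echo '47177.77 * 4096 / 1048576' | bc -l   # 184.288..., matching the reported 184.29 MiB/s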
00:50:38.108 [2024-12-09 10:01:44.776370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71932 ] 00:50:38.108 [2024-12-09 10:01:44.969887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:38.108 [2024-12-09 10:01:45.130796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:38.678 Running I/O for 5 seconds... 00:50:40.550 41088.00 IOPS, 160.50 MiB/s [2024-12-09T10:01:48.530Z] 39873.00 IOPS, 155.75 MiB/s [2024-12-09T10:01:49.908Z] 39723.33 IOPS, 155.17 MiB/s [2024-12-09T10:01:50.842Z] 39552.50 IOPS, 154.50 MiB/s 00:50:43.798 Latency(us) 00:50:43.798 [2024-12-09T10:01:50.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:43.798 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:50:43.798 xnvme_bdev : 5.00 39480.57 154.22 0.00 0.00 1615.33 997.93 3991.74 00:50:43.798 [2024-12-09T10:01:50.842Z] =================================================================================================================== 00:50:43.798 [2024-12-09T10:01:50.842Z] Total : 39480.57 154.22 0.00 0.00 1615.33 997.93 3991.74 00:50:44.733 00:50:44.733 real 0m13.931s 00:50:44.733 user 0m7.041s 00:50:44.733 sys 0m6.664s 00:50:44.733 10:01:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:44.733 10:01:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:50:44.733 ************************************ 00:50:44.733 END TEST xnvme_bdevperf 00:50:44.733 ************************************ 00:50:44.733 10:01:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:50:44.733 10:01:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:50:44.733 10:01:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:50:44.733 10:01:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:50:44.733 ************************************ 00:50:44.733 START TEST xnvme_fio_plugin 00:50:44.733 ************************************ 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:50:44.733 10:01:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:50:44.733 { 00:50:44.733 "subsystems": [ 00:50:44.733 { 00:50:44.733 "subsystem": "bdev", 00:50:44.733 "config": [ 00:50:44.733 { 00:50:44.733 "params": { 00:50:44.733 "io_mechanism": "io_uring", 00:50:44.733 "conserve_cpu": false, 00:50:44.733 "filename": "/dev/nvme0n1", 00:50:44.733 "name": "xnvme_bdev" 00:50:44.733 }, 00:50:44.733 "method": "bdev_xnvme_create" 00:50:44.733 }, 00:50:44.733 { 00:50:44.733 "method": "bdev_wait_for_examine" 00:50:44.733 } 00:50:44.733 ] 00:50:44.733 } 00:50:44.733 ] 00:50:44.733 } 00:50:44.992 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:50:44.992 fio-3.35 00:50:44.992 Starting 1 thread 00:50:51.554 00:50:51.554 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72056: Mon Dec 9 10:01:57 2024 00:50:51.554 read: IOPS=44.1k, BW=172MiB/s (181MB/s)(862MiB/5001msec) 00:50:51.554 slat (usec): min=2, max=110, avg= 4.62, stdev= 2.07 00:50:51.554 clat (usec): min=194, max=5244, avg=1266.46, stdev=243.58 00:50:51.554 lat (usec): min=198, max=5248, avg=1271.08, stdev=244.08 00:50:51.554 clat percentiles (usec): 00:50:51.554 | 1.00th=[ 930], 5.00th=[ 1012], 10.00th=[ 1057], 20.00th=[ 1106], 00:50:51.554 | 30.00th=[ 1139], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1270], 00:50:51.554 | 70.00th=[ 1319], 80.00th=[ 1385], 90.00th=[ 1532], 95.00th=[ 1663], 00:50:51.554 | 99.00th=[ 1975], 99.50th=[ 2376], 99.90th=[ 3720], 99.95th=[ 4113], 00:50:51.554 | 99.99th=[ 4948] 00:50:51.554 bw ( KiB/s): min=160256, max=190464, per=99.80%, avg=176192.89, 
stdev=8598.31, samples=9 00:50:51.554 iops : min=40064, max=47616, avg=44048.44, stdev=2149.23, samples=9 00:50:51.554 lat (usec) : 250=0.01%, 500=0.05%, 750=0.13%, 1000=3.85% 00:50:51.554 lat (msec) : 2=95.01%, 4=0.89%, 10=0.06% 00:50:51.554 cpu : usr=37.68%, sys=61.16%, ctx=10, majf=0, minf=762 00:50:51.554 IO depths : 1=1.4%, 2=2.8%, 4=6.0%, 8=12.4%, 16=25.1%, 32=50.7%, >=64=1.6% 00:50:51.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:51.554 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:50:51.554 issued rwts: total=220728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:50:51.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:50:51.554 00:50:51.554 Run status group 0 (all jobs): 00:50:51.554 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=862MiB (904MB), run=5001-5001msec 00:50:52.491 ----------------------------------------------------- 00:50:52.491 Suppressions used: 00:50:52.491 count bytes template 00:50:52.491 1 11 /usr/src/fio/parse.c 00:50:52.491 1 8 libtcmalloc_minimal.so 00:50:52.491 1 904 libcrypto.so 00:50:52.491 ----------------------------------------------------- 00:50:52.491 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:50:52.491 10:01:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:50:52.491 { 00:50:52.491 "subsystems": [ 00:50:52.491 { 00:50:52.491 "subsystem": "bdev", 00:50:52.491 "config": [ 00:50:52.491 { 00:50:52.491 "params": { 00:50:52.491 "io_mechanism": "io_uring", 00:50:52.491 "conserve_cpu": false, 00:50:52.491 "filename": "/dev/nvme0n1", 00:50:52.491 "name": "xnvme_bdev" 00:50:52.491 }, 00:50:52.491 "method": "bdev_xnvme_create" 00:50:52.491 }, 00:50:52.491 { 00:50:52.491 "method": "bdev_wait_for_examine" 00:50:52.491 } 00:50:52.491 ] 00:50:52.491 } 00:50:52.491 ] 00:50:52.491 } 00:50:52.759 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:50:52.759 fio-3.35 00:50:52.759 Starting 1 thread 00:50:59.321 00:50:59.321 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72149: Mon Dec 9 10:02:05 2024 00:50:59.321 write: IOPS=43.3k, BW=169MiB/s (177MB/s)(845MiB/5001msec); 0 zone resets 00:50:59.321 slat (usec): min=2, max=304, avg= 4.96, stdev= 2.61 00:50:59.321 clat (usec): min=171, max=3626, avg=1282.57, stdev=193.54 00:50:59.321 lat (usec): min=175, max=3635, avg=1287.53, stdev=194.43 00:50:59.321 clat percentiles (usec): 00:50:59.321 | 1.00th=[ 988], 5.00th=[ 1045], 10.00th=[ 1074], 20.00th=[ 1139], 00:50:59.321 | 30.00th=[ 1172], 40.00th=[ 1205], 50.00th=[ 1254], 60.00th=[ 1287], 00:50:59.321 | 70.00th=[ 1336], 80.00th=[ 1401], 90.00th=[ 1516], 95.00th=[ 1663], 00:50:59.321 | 99.00th=[ 1893], 99.50th=[ 1958], 99.90th=[ 2442], 99.95th=[ 2769], 00:50:59.321 | 99.99th=[ 3523] 00:50:59.321 bw ( KiB/s): min=161280, max=181504, per=100.00%, avg=173768.00, stdev=7130.96, samples=9 00:50:59.321 iops : min=40320, max=45376, avg=43442.00, stdev=1783.09, samples=9 00:50:59.321 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=1.49% 00:50:59.321 lat (msec) : 2=98.11%, 4=0.37% 00:50:59.321 cpu : usr=39.82%, sys=59.04%, ctx=51, majf=0, minf=763 00:50:59.321 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:50:59.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:59.321 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:50:59.321 issued rwts: total=0,216432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:50:59.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:50:59.321 00:50:59.321 Run status group 0 (all jobs): 00:50:59.321 WRITE: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=845MiB (887MB), run=5001-5001msec 00:50:59.888 ----------------------------------------------------- 00:50:59.888 Suppressions used: 00:50:59.888 count bytes template 00:50:59.888 1 11 /usr/src/fio/parse.c 00:50:59.888 1 8 libtcmalloc_minimal.so 00:50:59.888 1 904 libcrypto.so 00:50:59.888 ----------------------------------------------------- 00:50:59.888 00:50:59.888 00:50:59.888 real 0m15.176s 00:50:59.888 user 0m7.962s 00:50:59.888 sys 0m6.816s 00:50:59.888 10:02:06 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:50:59.888 10:02:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:50:59.888 ************************************ 00:50:59.888 END TEST xnvme_fio_plugin 00:50:59.888 ************************************ 00:50:59.888 10:02:06 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:50:59.888 10:02:06 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:50:59.888 10:02:06 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:50:59.889 10:02:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:50:59.889 10:02:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:50:59.889 10:02:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:50:59.889 10:02:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:50:59.889 ************************************ 00:50:59.889 START TEST xnvme_rpc 00:50:59.889 ************************************ 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72241 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72241 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72241 ']' 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:59.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:59.889 10:02:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:00.147 [2024-12-09 10:02:07.064496] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
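The @82-@86 xtrace lines above are the inner loop of the test matrix: for each io_mechanism, both conserve_cpu settings get the same three tests. A reconstructed sketch of xnvme.sh lines 75-88 — array contents beyond what the trace shows (libaio, io_uring; false, true) are assumptions:

    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
        for cc in "${xnvme_conserve_cpu[@]}"; do
            method_bdev_xnvme_create_0["conserve_cpu"]=$cc
            conserve_cpu=$cc
            run_test xnvme_rpc xnvme_rpc
            run_test xnvme_bdevperf xnvme_bdevperf
            run_test xnvme_fio_plugin xnvme_fio_plugin
        done
    done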
00:51:00.147 [2024-12-09 10:02:07.064692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72241 ] 00:51:00.405 [2024-12-09 10:02:07.258540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:00.405 [2024-12-09 10:02:07.415992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:01.427 xnvme_bdev 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:01.427 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:51:01.685 10:02:08 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72241 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72241 ']' 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72241 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72241 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:51:01.685 killing process with pid 72241 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72241' 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72241 00:51:01.685 10:02:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72241 00:51:04.215 00:51:04.215 real 0m4.004s 00:51:04.215 user 0m4.148s 00:51:04.215 sys 0m0.569s 00:51:04.215 10:02:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:04.215 ************************************ 00:51:04.215 END TEST xnvme_rpc 00:51:04.215 10:02:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:04.215 ************************************ 00:51:04.215 10:02:10 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:51:04.215 10:02:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:04.215 10:02:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:04.215 10:02:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:51:04.215 ************************************ 00:51:04.215 START TEST xnvme_bdevperf 00:51:04.215 ************************************ 00:51:04.215 10:02:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:51:04.215 10:02:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:51:04.215 10:02:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:51:04.215 10:02:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:51:04.215 10:02:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:51:04.215 10:02:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:51:04.215 10:02:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:51:04.215 10:02:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:51:04.215 { 00:51:04.215 "subsystems": [ 00:51:04.215 { 00:51:04.215 "subsystem": "bdev", 00:51:04.215 "config": [ 00:51:04.215 { 00:51:04.215 "params": { 00:51:04.215 "io_mechanism": "io_uring", 00:51:04.215 "conserve_cpu": true, 00:51:04.215 "filename": "/dev/nvme0n1", 00:51:04.215 "name": "xnvme_bdev" 00:51:04.215 }, 00:51:04.215 "method": "bdev_xnvme_create" 00:51:04.215 }, 00:51:04.215 { 00:51:04.215 "method": "bdev_wait_for_examine" 00:51:04.215 } 00:51:04.215 ] 00:51:04.215 } 00:51:04.215 ] 00:51:04.215 } 00:51:04.215 [2024-12-09 10:02:11.081870] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:51:04.215 [2024-12-09 10:02:11.082075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72318 ] 00:51:04.215 [2024-12-09 10:02:11.254619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:04.473 [2024-12-09 10:02:11.393979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:04.730 Running I/O for 5 seconds... 00:51:07.040 43904.00 IOPS, 171.50 MiB/s [2024-12-09T10:02:15.020Z] 45728.00 IOPS, 178.62 MiB/s [2024-12-09T10:02:15.955Z] 46279.00 IOPS, 180.78 MiB/s [2024-12-09T10:02:16.891Z] 46705.25 IOPS, 182.44 MiB/s [2024-12-09T10:02:16.891Z] 46688.80 IOPS, 182.38 MiB/s 00:51:09.847 Latency(us) 00:51:09.847 [2024-12-09T10:02:16.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:09.847 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:51:09.847 xnvme_bdev : 5.00 46666.88 182.29 0.00 0.00 1366.78 670.25 8698.41 00:51:09.847 [2024-12-09T10:02:16.891Z] =================================================================================================================== 00:51:09.847 [2024-12-09T10:02:16.891Z] Total : 46666.88 182.29 0.00 0.00 1366.78 670.25 8698.41 00:51:11.226 10:02:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:51:11.226 10:02:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:51:11.226 10:02:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:51:11.226 10:02:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:51:11.226 10:02:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:51:11.226 { 00:51:11.226 "subsystems": [ 00:51:11.226 { 00:51:11.226 "subsystem": "bdev", 00:51:11.226 "config": [ 00:51:11.226 { 00:51:11.226 "params": { 00:51:11.226 "io_mechanism": "io_uring", 00:51:11.226 "conserve_cpu": true, 00:51:11.226 "filename": "/dev/nvme0n1", 00:51:11.226 "name": "xnvme_bdev" 00:51:11.226 }, 00:51:11.226 "method": "bdev_xnvme_create" 00:51:11.226 }, 00:51:11.226 { 00:51:11.226 "method": "bdev_wait_for_examine" 00:51:11.226 } 00:51:11.226 ] 00:51:11.226 } 00:51:11.226 ] 00:51:11.226 } 00:51:11.226 [2024-12-09 10:02:17.992563] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:51:11.226 [2024-12-09 10:02:17.992748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72399 ] 00:51:11.226 [2024-12-09 10:02:18.177494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:11.485 [2024-12-09 10:02:18.310709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:11.744 Running I/O for 5 seconds... 00:51:14.056 44544.00 IOPS, 174.00 MiB/s [2024-12-09T10:02:22.036Z] 45408.00 IOPS, 177.38 MiB/s [2024-12-09T10:02:22.972Z] 45760.00 IOPS, 178.75 MiB/s [2024-12-09T10:02:23.908Z] 45376.00 IOPS, 177.25 MiB/s 00:51:16.864 Latency(us) 00:51:16.864 [2024-12-09T10:02:23.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:16.864 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:51:16.864 xnvme_bdev : 5.00 44461.45 173.68 0.00 0.00 1434.64 837.82 5153.51 00:51:16.864 [2024-12-09T10:02:23.908Z] =================================================================================================================== 00:51:16.864 [2024-12-09T10:02:23.908Z] Total : 44461.45 173.68 0.00 0.00 1434.64 837.82 5153.51 00:51:17.799 00:51:17.799 real 0m13.828s 00:51:17.799 user 0m8.538s 00:51:17.799 sys 0m4.757s 00:51:17.799 10:02:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:17.799 10:02:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:51:17.799 ************************************ 00:51:17.799 END TEST xnvme_bdevperf 00:51:17.799 ************************************ 00:51:18.058 10:02:24 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:51:18.058 10:02:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:18.058 10:02:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:18.058 10:02:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:51:18.058 ************************************ 00:51:18.058 START TEST xnvme_fio_plugin 00:51:18.058 ************************************ 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:51:18.058 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:51:18.059 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:51:18.059 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:18.059 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:51:18.059 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:51:18.059 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:51:18.059 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:51:18.059 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:51:18.059 10:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:51:18.059 { 00:51:18.059 "subsystems": [ 00:51:18.059 { 00:51:18.059 "subsystem": "bdev", 00:51:18.059 "config": [ 00:51:18.059 { 00:51:18.059 "params": { 00:51:18.059 "io_mechanism": "io_uring", 00:51:18.059 "conserve_cpu": true, 00:51:18.059 "filename": "/dev/nvme0n1", 00:51:18.059 "name": "xnvme_bdev" 00:51:18.059 }, 00:51:18.059 "method": "bdev_xnvme_create" 00:51:18.059 }, 00:51:18.059 { 00:51:18.059 "method": "bdev_wait_for_examine" 00:51:18.059 } 00:51:18.059 ] 00:51:18.059 } 00:51:18.059 ] 00:51:18.059 } 00:51:18.315 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:51:18.315 fio-3.35 00:51:18.315 Starting 1 thread 00:51:24.873 00:51:24.873 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72524: Mon Dec 9 10:02:30 2024 00:51:24.873 read: IOPS=45.7k, BW=178MiB/s (187MB/s)(892MiB/5001msec) 00:51:24.873 slat (usec): min=2, max=138, avg= 4.50, stdev= 1.98 00:51:24.873 clat (usec): min=803, max=2896, avg=1220.77, stdev=200.05 00:51:24.873 lat (usec): min=806, max=2904, avg=1225.27, stdev=200.73 00:51:24.873 clat percentiles (usec): 00:51:24.873 | 1.00th=[ 914], 5.00th=[ 971], 10.00th=[ 1004], 20.00th=[ 1057], 00:51:24.873 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1188], 60.00th=[ 1221], 00:51:24.873 | 70.00th=[ 1270], 80.00th=[ 1352], 90.00th=[ 1516], 95.00th=[ 1631], 00:51:24.873 | 99.00th=[ 1811], 99.50th=[ 1893], 99.90th=[ 2089], 99.95th=[ 2180], 00:51:24.873 | 99.99th=[ 2802] 00:51:24.873 bw ( KiB/s): min=162816, max=196608, per=99.56%, avg=181889.33, 
stdev=12446.81, samples=9 00:51:24.873 iops : min=40704, max=49152, avg=45472.33, stdev=3111.70, samples=9 00:51:24.873 lat (usec) : 1000=8.95% 00:51:24.873 lat (msec) : 2=90.82%, 4=0.23% 00:51:24.873 cpu : usr=55.34%, sys=40.50%, ctx=55, majf=0, minf=762 00:51:24.873 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:51:24.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:24.873 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:51:24.873 issued rwts: total=228416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:24.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:51:24.873 00:51:24.873 Run status group 0 (all jobs): 00:51:24.873 READ: bw=178MiB/s (187MB/s), 178MiB/s-178MiB/s (187MB/s-187MB/s), io=892MiB (936MB), run=5001-5001msec 00:51:25.440 ----------------------------------------------------- 00:51:25.440 Suppressions used: 00:51:25.440 count bytes template 00:51:25.440 1 11 /usr/src/fio/parse.c 00:51:25.440 1 8 libtcmalloc_minimal.so 00:51:25.440 1 904 libcrypto.so 00:51:25.440 ----------------------------------------------------- 00:51:25.440 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:51:25.440 10:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:51:25.440 { 00:51:25.440 "subsystems": [ 00:51:25.440 { 00:51:25.440 "subsystem": "bdev", 00:51:25.440 "config": [ 00:51:25.440 { 00:51:25.440 "params": { 00:51:25.440 "io_mechanism": "io_uring", 00:51:25.440 "conserve_cpu": true, 00:51:25.440 "filename": "/dev/nvme0n1", 00:51:25.440 "name": "xnvme_bdev" 00:51:25.440 }, 00:51:25.440 "method": "bdev_xnvme_create" 00:51:25.440 }, 00:51:25.440 { 00:51:25.440 "method": "bdev_wait_for_examine" 00:51:25.440 } 00:51:25.440 ] 00:51:25.440 } 00:51:25.440 ] 00:51:25.440 } 00:51:25.699 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:51:25.699 fio-3.35 00:51:25.699 Starting 1 thread 00:51:32.335 00:51:32.335 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72622: Mon Dec 9 10:02:38 2024 00:51:32.335 write: IOPS=43.3k, BW=169MiB/s (177MB/s)(846MiB/5001msec); 0 zone resets 00:51:32.335 slat (nsec): min=3017, max=99616, avg=5118.65, stdev=2471.87 00:51:32.335 clat (usec): min=775, max=3035, avg=1272.64, stdev=227.43 00:51:32.335 lat (usec): min=779, max=3039, avg=1277.76, stdev=228.50 00:51:32.335 clat percentiles (usec): 00:51:32.335 | 1.00th=[ 938], 5.00th=[ 996], 10.00th=[ 1037], 20.00th=[ 1090], 00:51:32.335 | 30.00th=[ 1139], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1270], 00:51:32.335 | 70.00th=[ 1336], 80.00th=[ 1418], 90.00th=[ 1582], 95.00th=[ 1729], 00:51:32.335 | 99.00th=[ 2040], 99.50th=[ 2147], 99.90th=[ 2343], 99.95th=[ 2442], 00:51:32.335 | 99.99th=[ 2671] 00:51:32.335 bw ( KiB/s): min=163328, max=187392, per=100.00%, avg=177259.56, stdev=8207.96, samples=9 00:51:32.335 iops : min=40832, max=46848, avg=44314.89, stdev=2051.99, samples=9 00:51:32.335 lat (usec) : 1000=5.31% 00:51:32.335 lat (msec) : 2=93.46%, 4=1.23% 00:51:32.335 cpu : usr=54.24%, sys=41.76%, ctx=39, majf=0, minf=763 00:51:32.335 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:51:32.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:32.335 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:51:32.335 issued rwts: total=0,216633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:32.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:51:32.335 00:51:32.335 Run status group 0 (all jobs): 00:51:32.335 WRITE: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=846MiB (887MB), run=5001-5001msec 00:51:33.270 ----------------------------------------------------- 00:51:33.270 Suppressions used: 00:51:33.270 count bytes template 00:51:33.270 1 11 /usr/src/fio/parse.c 00:51:33.270 1 8 libtcmalloc_minimal.so 00:51:33.270 1 904 libcrypto.so 00:51:33.270 ----------------------------------------------------- 00:51:33.270 00:51:33.270 ************************************ 00:51:33.270 END TEST xnvme_fio_plugin 00:51:33.270 ************************************ 00:51:33.270 00:51:33.270 real 0m15.184s 00:51:33.270 user 0m9.594s 00:51:33.270 sys 
0m4.877s 00:51:33.270 10:02:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:33.270 10:02:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:51:33.270 10:02:40 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:51:33.270 10:02:40 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:51:33.270 10:02:40 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:51:33.270 10:02:40 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:51:33.270 10:02:40 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:51:33.270 10:02:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:51:33.270 10:02:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:51:33.270 10:02:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:51:33.270 10:02:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:51:33.270 10:02:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:33.270 10:02:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:33.270 10:02:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:51:33.270 ************************************ 00:51:33.270 START TEST xnvme_rpc 00:51:33.270 ************************************ 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72715 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72715 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72715 ']' 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:33.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:33.271 10:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:33.271 [2024-12-09 10:02:40.253550] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
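For reference, the RPC sequence this xnvme_rpc pass drives can be reproduced by hand. A minimal sketch against SPDK's stock scripts/rpc.py client (rpc_cmd in the harness wraps it); the device path, bdev name, io mechanism, and jq filter are taken verbatim from the trace, while the bare rpc.py invocation itself is an assumption, not something this log shows:

  # create the bdev over io_uring_cmd; conserve_cpu stays at its default
  # (the harness passes -c for the conserve_cpu=true variants)
  ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
  # read one parameter back out of the saved bdev config
  ./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
  # tear the bdev down again
  ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev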
00:51:33.271 [2024-12-09 10:02:40.253746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72715 ] 00:51:33.529 [2024-12-09 10:02:40.438664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:33.787 [2024-12-09 10:02:40.628727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:34.722 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:34.723 xnvme_bdev 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:34.723 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72715 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72715 ']' 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72715 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72715 00:51:34.981 killing process with pid 72715 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72715' 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72715 00:51:34.981 10:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72715 00:51:37.510 00:51:37.510 real 0m4.146s 00:51:37.510 user 0m4.435s 00:51:37.510 sys 0m0.611s 00:51:37.510 10:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:37.510 10:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:37.510 ************************************ 00:51:37.510 END TEST xnvme_rpc 00:51:37.510 ************************************ 00:51:37.510 10:02:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:51:37.510 10:02:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:37.510 10:02:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:37.510 10:02:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:51:37.510 ************************************ 00:51:37.510 START TEST xnvme_bdevperf 00:51:37.510 ************************************ 00:51:37.510 10:02:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:51:37.510 10:02:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:51:37.510 10:02:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:51:37.510 10:02:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:51:37.510 10:02:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:51:37.510 10:02:44 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:51:37.510 10:02:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:51:37.510 10:02:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:51:37.510 { 00:51:37.510 "subsystems": [ 00:51:37.510 { 00:51:37.510 "subsystem": "bdev", 00:51:37.510 "config": [ 00:51:37.510 { 00:51:37.510 "params": { 00:51:37.510 "io_mechanism": "io_uring_cmd", 00:51:37.510 "conserve_cpu": false, 00:51:37.510 "filename": "/dev/ng0n1", 00:51:37.510 "name": "xnvme_bdev" 00:51:37.510 }, 00:51:37.510 "method": "bdev_xnvme_create" 00:51:37.510 }, 00:51:37.510 { 00:51:37.510 "method": "bdev_wait_for_examine" 00:51:37.510 } 00:51:37.510 ] 00:51:37.510 } 00:51:37.510 ] 00:51:37.510 } 00:51:37.510 [2024-12-09 10:02:44.403006] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:51:37.510 [2024-12-09 10:02:44.403179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72796 ] 00:51:37.768 [2024-12-09 10:02:44.581689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:37.768 [2024-12-09 10:02:44.713556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:38.026 Running I/O for 5 seconds... 00:51:40.335 50368.00 IOPS, 196.75 MiB/s [2024-12-09T10:02:48.315Z] 49056.00 IOPS, 191.62 MiB/s [2024-12-09T10:02:49.250Z] 48960.00 IOPS, 191.25 MiB/s [2024-12-09T10:02:50.246Z] 48672.00 IOPS, 190.12 MiB/s 00:51:43.202 Latency(us) 00:51:43.202 [2024-12-09T10:02:50.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:43.202 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:51:43.202 xnvme_bdev : 5.00 48582.78 189.78 0.00 0.00 1313.10 826.65 3738.53 00:51:43.202 [2024-12-09T10:02:50.246Z] =================================================================================================================== 00:51:43.202 [2024-12-09T10:02:50.246Z] Total : 48582.78 189.78 0.00 0.00 1313.10 826.65 3738.53 00:51:44.138 10:02:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:51:44.138 10:02:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:51:44.138 10:02:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:51:44.138 10:02:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:51:44.138 10:02:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:51:44.397 { 00:51:44.397 "subsystems": [ 00:51:44.397 { 00:51:44.397 "subsystem": "bdev", 00:51:44.397 "config": [ 00:51:44.397 { 00:51:44.397 "params": { 00:51:44.397 "io_mechanism": "io_uring_cmd", 00:51:44.397 "conserve_cpu": false, 00:51:44.397 "filename": "/dev/ng0n1", 00:51:44.397 "name": "xnvme_bdev" 00:51:44.397 }, 00:51:44.397 "method": "bdev_xnvme_create" 00:51:44.397 }, 00:51:44.397 { 00:51:44.397 "method": "bdev_wait_for_examine" 00:51:44.397 } 00:51:44.397 ] 00:51:44.397 } 00:51:44.397 ] 00:51:44.397 } 00:51:44.397 [2024-12-09 10:02:51.239641] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
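One plumbing note that applies to every bdevperf run in this log: gen_conf prints the JSON shown above, and the harness hands it to bdevperf as --json /dev/fd/62 through a bash file-descriptor redirection. A standalone sketch, with process substitution standing in for the fd-62 redirect and every flag copied from the trace:

  ./build/examples/bdevperf -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
    --json <(cat <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring_cmd",
   "conserve_cpu":false,"filename":"/dev/ng0n1","name":"xnvme_bdev"}},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
)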
00:51:44.397 [2024-12-09 10:02:51.239846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72879 ] 00:51:44.397 [2024-12-09 10:02:51.429565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:44.655 [2024-12-09 10:02:51.586365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:45.222 Running I/O for 5 seconds... 00:51:47.093 43776.00 IOPS, 171.00 MiB/s [2024-12-09T10:02:55.071Z] 42944.00 IOPS, 167.75 MiB/s [2024-12-09T10:02:56.009Z] 42560.00 IOPS, 166.25 MiB/s [2024-12-09T10:02:57.384Z] 42511.50 IOPS, 166.06 MiB/s [2024-12-09T10:02:57.384Z] 42073.20 IOPS, 164.35 MiB/s 00:51:50.340 Latency(us) 00:51:50.340 [2024-12-09T10:02:57.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:50.340 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:51:50.340 xnvme_bdev : 5.00 42050.69 164.26 0.00 0.00 1516.89 889.95 4230.05 00:51:50.340 [2024-12-09T10:02:57.384Z] =================================================================================================================== 00:51:50.340 [2024-12-09T10:02:57.384Z] Total : 42050.69 164.26 0.00 0.00 1516.89 889.95 4230.05 00:51:51.274 10:02:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:51:51.274 10:02:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:51:51.274 10:02:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:51:51.274 10:02:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:51:51.274 10:02:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:51:51.274 { 00:51:51.274 "subsystems": [ 00:51:51.274 { 00:51:51.274 "subsystem": "bdev", 00:51:51.274 "config": [ 00:51:51.274 { 00:51:51.274 "params": { 00:51:51.274 "io_mechanism": "io_uring_cmd", 00:51:51.274 "conserve_cpu": false, 00:51:51.274 "filename": "/dev/ng0n1", 00:51:51.274 "name": "xnvme_bdev" 00:51:51.274 }, 00:51:51.274 "method": "bdev_xnvme_create" 00:51:51.274 }, 00:51:51.274 { 00:51:51.274 "method": "bdev_wait_for_examine" 00:51:51.274 } 00:51:51.274 ] 00:51:51.274 } 00:51:51.274 ] 00:51:51.274 } 00:51:51.274 [2024-12-09 10:02:58.238731] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:51:51.274 [2024-12-09 10:02:58.238958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72961 ] 00:51:51.532 [2024-12-09 10:02:58.421683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:51.532 [2024-12-09 10:02:58.554074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:52.098 Running I/O for 5 seconds... 
00:51:53.968 71936.00 IOPS, 281.00 MiB/s [2024-12-09T10:03:01.947Z] 71488.00 IOPS, 279.25 MiB/s [2024-12-09T10:03:03.321Z] 71317.33 IOPS, 278.58 MiB/s [2024-12-09T10:03:04.256Z] 71040.00 IOPS, 277.50 MiB/s [2024-12-09T10:03:04.256Z] 70438.40 IOPS, 275.15 MiB/s 00:51:57.212 Latency(us) 00:51:57.212 [2024-12-09T10:03:04.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:57.212 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:51:57.212 xnvme_bdev : 5.00 70416.10 275.06 0.00 0.00 904.94 471.04 2904.44 00:51:57.212 [2024-12-09T10:03:04.256Z] =================================================================================================================== 00:51:57.212 [2024-12-09T10:03:04.256Z] Total : 70416.10 275.06 0.00 0.00 904.94 471.04 2904.44 00:51:58.150 10:03:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:51:58.150 10:03:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:51:58.150 10:03:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:51:58.150 10:03:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:51:58.150 10:03:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:51:58.150 { 00:51:58.150 "subsystems": [ 00:51:58.150 { 00:51:58.150 "subsystem": "bdev", 00:51:58.150 "config": [ 00:51:58.150 { 00:51:58.150 "params": { 00:51:58.150 "io_mechanism": "io_uring_cmd", 00:51:58.150 "conserve_cpu": false, 00:51:58.150 "filename": "/dev/ng0n1", 00:51:58.150 "name": "xnvme_bdev" 00:51:58.150 }, 00:51:58.150 "method": "bdev_xnvme_create" 00:51:58.150 }, 00:51:58.150 { 00:51:58.150 "method": "bdev_wait_for_examine" 00:51:58.150 } 00:51:58.150 ] 00:51:58.150 } 00:51:58.150 ] 00:51:58.150 } 00:51:58.150 [2024-12-09 10:03:05.130722] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:51:58.150 [2024-12-09 10:03:05.130886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73035 ] 00:51:58.411 [2024-12-09 10:03:05.304960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:58.411 [2024-12-09 10:03:05.438722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:58.977 Running I/O for 5 seconds... 
00:52:00.849 17541.00 IOPS, 68.52 MiB/s [2024-12-09T10:03:08.839Z] 19167.50 IOPS, 74.87 MiB/s [2024-12-09T10:03:10.212Z] 26639.00 IOPS, 104.06 MiB/s [2024-12-09T10:03:11.145Z] 30253.25 IOPS, 118.18 MiB/s [2024-12-09T10:03:11.145Z] 32563.40 IOPS, 127.20 MiB/s 00:52:04.101 Latency(us) 00:52:04.101 [2024-12-09T10:03:11.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:04.101 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:52:04.101 xnvme_bdev : 5.00 32550.13 127.15 0.00 0.00 1961.37 80.52 20256.58 00:52:04.101 [2024-12-09T10:03:11.145Z] =================================================================================================================== 00:52:04.101 [2024-12-09T10:03:11.145Z] Total : 32550.13 127.15 0.00 0.00 1961.37 80.52 20256.58 00:52:05.037 ************************************ 00:52:05.037 END TEST xnvme_bdevperf 00:52:05.037 ************************************ 00:52:05.037 00:52:05.037 real 0m27.729s 00:52:05.037 user 0m15.860s 00:52:05.037 sys 0m11.452s 00:52:05.037 10:03:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:05.037 10:03:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:52:05.037 10:03:12 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:52:05.037 10:03:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:05.037 10:03:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:05.037 10:03:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:52:05.295 ************************************ 00:52:05.295 START TEST xnvme_fio_plugin 00:52:05.295 ************************************ 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
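The next several trace lines perform the sanitizer preload that lets fio load SPDK's ASan-instrumented external ioengine. Condensed into a standalone sketch — the paths and the ldd/grep/awk pipeline are verbatim from the trace; the comments are interpretation:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # find the ASan runtime the plugin links against
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # preload that runtime ahead of the plugin so fio, which is not built
  # with ASan, can still load the instrumented ioengine
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k \
    --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 \
    --thread=1 --name xnvme_bdev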
00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:52:05.295 10:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:52:05.295 { 00:52:05.295 "subsystems": [ 00:52:05.295 { 00:52:05.295 "subsystem": "bdev", 00:52:05.295 "config": [ 00:52:05.295 { 00:52:05.295 "params": { 00:52:05.295 "io_mechanism": "io_uring_cmd", 00:52:05.295 "conserve_cpu": false, 00:52:05.295 "filename": "/dev/ng0n1", 00:52:05.295 "name": "xnvme_bdev" 00:52:05.295 }, 00:52:05.295 "method": "bdev_xnvme_create" 00:52:05.295 }, 00:52:05.295 { 00:52:05.295 "method": "bdev_wait_for_examine" 00:52:05.295 } 00:52:05.295 ] 00:52:05.295 } 00:52:05.295 ] 00:52:05.295 } 00:52:05.553 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:52:05.553 fio-3.35 00:52:05.553 Starting 1 thread 00:52:12.113 00:52:12.113 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73158: Mon Dec 9 10:03:18 2024 00:52:12.113 read: IOPS=45.6k, BW=178MiB/s (187MB/s)(892MiB/5001msec) 00:52:12.113 slat (usec): min=2, max=109, avg= 4.41, stdev= 2.33 00:52:12.113 clat (usec): min=251, max=2583, avg=1225.77, stdev=177.49 00:52:12.113 lat (usec): min=254, max=2590, avg=1230.18, stdev=178.08 00:52:12.113 clat percentiles (usec): 00:52:12.113 | 1.00th=[ 930], 5.00th=[ 996], 10.00th=[ 1037], 20.00th=[ 1090], 00:52:12.113 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1237], 00:52:12.113 | 70.00th=[ 1287], 80.00th=[ 1336], 90.00th=[ 1450], 95.00th=[ 1565], 00:52:12.113 | 99.00th=[ 1795], 99.50th=[ 1893], 99.90th=[ 2089], 99.95th=[ 2180], 00:52:12.113 | 99.99th=[ 2474] 00:52:12.113 bw ( KiB/s): min=166400, max=205312, per=99.07%, avg=180849.78, stdev=10613.29, samples=9 00:52:12.113 iops : min=41600, max=51328, avg=45212.44, stdev=2653.32, samples=9 00:52:12.113 lat (usec) : 500=0.01%, 1000=5.32% 00:52:12.113 lat (msec) : 2=94.46%, 4=0.21% 00:52:12.113 cpu : usr=42.48%, sys=56.44%, ctx=43, majf=0, minf=762 00:52:12.113 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:52:12.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:12.113 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:52:12.113 issued rwts: total=228240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:12.113 latency : target=0, window=0, percentile=100.00%, depth=64 00:52:12.113 00:52:12.113 Run status group 0 (all jobs): 00:52:12.113 READ: bw=178MiB/s (187MB/s), 178MiB/s-178MiB/s (187MB/s-187MB/s), io=892MiB (935MB), run=5001-5001msec 00:52:12.680 ----------------------------------------------------- 00:52:12.680 Suppressions used: 00:52:12.680 count bytes template 00:52:12.680 1 11 /usr/src/fio/parse.c 00:52:12.680 1 8 libtcmalloc_minimal.so 00:52:12.680 1 904 libcrypto.so 00:52:12.680 ----------------------------------------------------- 00:52:12.680 00:52:12.680 10:03:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:52:12.680 10:03:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:52:12.680 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:52:12.680 10:03:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:52:12.680 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:52:12.681 10:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:52:12.941 { 00:52:12.941 "subsystems": [ 00:52:12.941 { 00:52:12.941 "subsystem": "bdev", 00:52:12.941 "config": [ 00:52:12.941 { 00:52:12.941 "params": { 00:52:12.941 "io_mechanism": "io_uring_cmd", 00:52:12.941 "conserve_cpu": false, 00:52:12.941 "filename": "/dev/ng0n1", 00:52:12.941 "name": "xnvme_bdev" 00:52:12.941 }, 00:52:12.941 "method": "bdev_xnvme_create" 00:52:12.941 }, 00:52:12.941 { 00:52:12.941 "method": "bdev_wait_for_examine" 00:52:12.941 } 00:52:12.941 ] 00:52:12.941 } 00:52:12.941 ] 00:52:12.941 } 00:52:12.941 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:52:12.941 fio-3.35 00:52:12.941 Starting 1 thread 00:52:19.563 00:52:19.563 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73256: Mon Dec 9 10:03:25 2024 00:52:19.563 write: IOPS=44.2k, BW=173MiB/s (181MB/s)(863MiB/5001msec); 0 zone resets 00:52:19.563 slat (usec): min=2, max=121, avg= 4.96, stdev= 2.77 00:52:19.563 clat (usec): min=399, max=3979, avg=1252.83, stdev=175.31 00:52:19.563 lat (usec): min=406, max=3987, avg=1257.79, stdev=175.94 00:52:19.563 clat percentiles (usec): 00:52:19.563 | 1.00th=[ 955], 5.00th=[ 1020], 10.00th=[ 1057], 20.00th=[ 1106], 00:52:19.563 | 30.00th=[ 1156], 40.00th=[ 1188], 50.00th=[ 1237], 60.00th=[ 1270], 00:52:19.563 | 70.00th=[ 1319], 80.00th=[ 1369], 90.00th=[ 1467], 95.00th=[ 1565], 00:52:19.563 | 99.00th=[ 1795], 99.50th=[ 1876], 99.90th=[ 2008], 99.95th=[ 2147], 00:52:19.563 | 99.99th=[ 3884] 00:52:19.563 bw ( KiB/s): min=166744, max=186880, per=100.00%, avg=177360.89, stdev=6342.74, samples=9 00:52:19.563 iops : min=41686, max=46720, avg=44340.22, stdev=1585.69, samples=9 00:52:19.563 lat (usec) : 500=0.01%, 750=0.01%, 1000=3.35% 00:52:19.563 lat (msec) : 2=96.53%, 4=0.11% 00:52:19.563 cpu : usr=44.80%, sys=53.94%, ctx=12, majf=0, minf=763 00:52:19.563 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:52:19.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:19.563 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:52:19.563 issued rwts: total=0,220861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:19.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:52:19.563 00:52:19.563 Run status group 0 (all jobs): 00:52:19.563 WRITE: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=863MiB (905MB), run=5001-5001msec 00:52:20.509 ----------------------------------------------------- 00:52:20.509 Suppressions used: 00:52:20.509 count bytes template 00:52:20.509 1 11 /usr/src/fio/parse.c 00:52:20.509 1 8 libtcmalloc_minimal.so 00:52:20.509 1 904 libcrypto.so 00:52:20.509 ----------------------------------------------------- 00:52:20.509 00:52:20.509 ************************************ 00:52:20.509 END TEST xnvme_fio_plugin 00:52:20.509 ************************************ 00:52:20.509 00:52:20.509 real 0m15.361s 00:52:20.509 user 0m8.571s 00:52:20.509 sys 0m6.397s 00:52:20.509 10:03:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:20.509 10:03:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:52:20.509 10:03:27 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:52:20.509 10:03:27 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:52:20.509 10:03:27 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:52:20.509 
10:03:27 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:52:20.509 10:03:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:20.509 10:03:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:20.509 10:03:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:52:20.509 ************************************ 00:52:20.509 START TEST xnvme_rpc 00:52:20.509 ************************************ 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:52:20.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73341 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73341 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73341 ']' 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:20.509 10:03:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:20.773 [2024-12-09 10:03:27.631795] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:52:20.773 [2024-12-09 10:03:27.632039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73341 ] 00:52:21.031 [2024-12-09 10:03:27.827029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:21.031 [2024-12-09 10:03:28.007349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:22.405 xnvme_bdev 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73341 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73341 ']' 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73341 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73341 00:52:22.405 killing process with pid 73341 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73341' 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73341 00:52:22.405 10:03:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73341 00:52:24.954 ************************************ 00:52:24.954 END TEST xnvme_rpc 00:52:24.954 ************************************ 00:52:24.954 00:52:24.954 real 0m4.238s 00:52:24.954 user 0m4.498s 00:52:24.954 sys 0m0.637s 00:52:24.954 10:03:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:24.954 10:03:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:24.954 10:03:31 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:52:24.954 10:03:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:24.954 10:03:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:24.954 10:03:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:52:24.954 ************************************ 00:52:24.954 START TEST xnvme_bdevperf 00:52:24.954 ************************************ 00:52:24.954 10:03:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:52:24.954 10:03:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:52:24.954 10:03:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:52:24.954 10:03:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:52:24.954 10:03:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:52:24.954 10:03:31 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:52:24.954 10:03:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:52:24.954 10:03:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:52:24.954 { 00:52:24.954 "subsystems": [ 00:52:24.954 { 00:52:24.954 "subsystem": "bdev", 00:52:24.954 "config": [ 00:52:24.954 { 00:52:24.954 "params": { 00:52:24.954 "io_mechanism": "io_uring_cmd", 00:52:24.954 "conserve_cpu": true, 00:52:24.954 "filename": "/dev/ng0n1", 00:52:24.954 "name": "xnvme_bdev" 00:52:24.954 }, 00:52:24.954 "method": "bdev_xnvme_create" 00:52:24.954 }, 00:52:24.954 { 00:52:24.954 "method": "bdev_wait_for_examine" 00:52:24.954 } 00:52:24.954 ] 00:52:24.954 } 00:52:24.954 ] 00:52:24.954 } 00:52:24.954 [2024-12-09 10:03:31.898458] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:24.954 [2024-12-09 10:03:31.898652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73427 ] 00:52:25.234 [2024-12-09 10:03:32.082399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:25.493 [2024-12-09 10:03:32.279874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:25.752 Running I/O for 5 seconds... 00:52:27.622 48960.00 IOPS, 191.25 MiB/s [2024-12-09T10:03:36.041Z] 47904.00 IOPS, 187.12 MiB/s [2024-12-09T10:03:36.976Z] 48576.00 IOPS, 189.75 MiB/s [2024-12-09T10:03:37.921Z] 49168.00 IOPS, 192.06 MiB/s 00:52:30.877 Latency(us) 00:52:30.878 [2024-12-09T10:03:37.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:30.878 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:52:30.878 xnvme_bdev : 5.00 49123.70 191.89 0.00 0.00 1298.83 815.48 3336.38 00:52:30.878 [2024-12-09T10:03:37.922Z] =================================================================================================================== 00:52:30.878 [2024-12-09T10:03:37.922Z] Total : 49123.70 191.89 0.00 0.00 1298.83 815.48 3336.38 00:52:31.814 10:03:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:52:31.814 10:03:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:52:31.814 10:03:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:52:31.814 10:03:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:52:31.814 10:03:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:52:31.814 { 00:52:31.814 "subsystems": [ 00:52:31.814 { 00:52:31.814 "subsystem": "bdev", 00:52:31.814 "config": [ 00:52:31.814 { 00:52:31.814 "params": { 00:52:31.814 "io_mechanism": "io_uring_cmd", 00:52:31.814 "conserve_cpu": true, 00:52:31.814 "filename": "/dev/ng0n1", 00:52:31.814 "name": "xnvme_bdev" 00:52:31.814 }, 00:52:31.814 "method": "bdev_xnvme_create" 00:52:31.814 }, 00:52:31.814 { 00:52:31.814 "method": "bdev_wait_for_examine" 00:52:31.814 } 00:52:31.814 ] 00:52:31.814 } 00:52:31.814 ] 00:52:31.814 } 00:52:31.814 [2024-12-09 10:03:38.824246] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
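The JSON fragments printed by gen_conf above are exactly what bdevperf reads from --json /dev/fd/62: one bdev_xnvme_create over /dev/ng0n1 with io_uring_cmd and conserve_cpu, followed by bdev_wait_for_examine. A minimal standalone reproduction of the randread run, assuming the same char device and SPDK tree as this log (the temp-file plumbing stands in for the harness's process substitution):

#!/usr/bin/env bash
# Sketch of the traced bdevperf run: randread, queue depth 64, 4 KiB I/O,
# 5 seconds, against the xnvme_bdev defined by the same JSON gen_conf
# emitted above. /dev/ng0n1 and the SPDK path are the values from this log.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk

conf=$(mktemp)
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": true,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

"$SPDK/build/examples/bdevperf" --json "$conf" \
  -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
rm -f "$conf"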
00:52:31.814 [2024-12-09 10:03:38.824684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73502 ] 00:52:32.073 [2024-12-09 10:03:38.998022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:32.331 [2024-12-09 10:03:39.121004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:32.590 Running I/O for 5 seconds... 00:52:34.458 42474.00 IOPS, 165.91 MiB/s [2024-12-09T10:03:42.878Z] 41999.00 IOPS, 164.06 MiB/s [2024-12-09T10:03:43.813Z] 42783.33 IOPS, 167.12 MiB/s [2024-12-09T10:03:44.748Z] 42903.50 IOPS, 167.59 MiB/s 00:52:37.704 Latency(us) 00:52:37.704 [2024-12-09T10:03:44.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:37.704 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:52:37.704 xnvme_bdev : 5.00 43057.09 168.19 0.00 0.00 1481.26 92.16 11081.54 00:52:37.704 [2024-12-09T10:03:44.748Z] =================================================================================================================== 00:52:37.704 [2024-12-09T10:03:44.748Z] Total : 43057.09 168.19 0.00 0.00 1481.26 92.16 11081.54 00:52:38.637 10:03:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:52:38.637 10:03:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:52:38.637 10:03:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:52:38.637 10:03:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:52:38.637 10:03:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:52:38.895 { 00:52:38.895 "subsystems": [ 00:52:38.895 { 00:52:38.895 "subsystem": "bdev", 00:52:38.895 "config": [ 00:52:38.895 { 00:52:38.895 "params": { 00:52:38.895 "io_mechanism": "io_uring_cmd", 00:52:38.895 "conserve_cpu": true, 00:52:38.895 "filename": "/dev/ng0n1", 00:52:38.895 "name": "xnvme_bdev" 00:52:38.895 }, 00:52:38.895 "method": "bdev_xnvme_create" 00:52:38.895 }, 00:52:38.895 { 00:52:38.895 "method": "bdev_wait_for_examine" 00:52:38.895 } 00:52:38.895 ] 00:52:38.895 } 00:52:38.895 ] 00:52:38.895 } 00:52:38.895 [2024-12-09 10:03:45.754445] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:38.895 [2024-12-09 10:03:45.754626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73583 ] 00:52:39.153 [2024-12-09 10:03:45.940881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:39.153 [2024-12-09 10:03:46.068570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:39.411 Running I/O for 5 seconds... 
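As a sanity check on the Latency tables in this run: bdevperf's MiB/s column is just IOPS times the 4096-byte I/O size, and 4096 B is 1/256 of a MiB, so MiB/s == IOPS / 256. For the randwrite Total row above:

# Bandwidth = IOPS * io_size; MiB/s = bytes/s / 2^20.
awk 'BEGIN { printf "%.2f MiB/s\n", 43057.09 * 4096 / 1048576 }'
# prints 168.19 MiB/s, matching the randwrite Total row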
00:52:41.721 67584.00 IOPS, 264.00 MiB/s [2024-12-09T10:03:49.701Z] 69728.00 IOPS, 272.38 MiB/s [2024-12-09T10:03:50.636Z] 70058.67 IOPS, 273.67 MiB/s [2024-12-09T10:03:51.571Z] 69952.00 IOPS, 273.25 MiB/s 00:52:44.527 Latency(us) 00:52:44.527 [2024-12-09T10:03:51.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:44.527 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:52:44.527 xnvme_bdev : 5.00 69647.21 272.06 0.00 0.00 914.90 498.97 3291.69 00:52:44.527 [2024-12-09T10:03:51.571Z] =================================================================================================================== 00:52:44.527 [2024-12-09T10:03:51.571Z] Total : 69647.21 272.06 0.00 0.00 914.90 498.97 3291.69 00:52:45.905 10:03:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:52:45.905 10:03:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:52:45.905 10:03:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:52:45.905 10:03:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:52:45.905 10:03:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:52:45.905 { 00:52:45.905 "subsystems": [ 00:52:45.905 { 00:52:45.905 "subsystem": "bdev", 00:52:45.905 "config": [ 00:52:45.905 { 00:52:45.905 "params": { 00:52:45.905 "io_mechanism": "io_uring_cmd", 00:52:45.905 "conserve_cpu": true, 00:52:45.905 "filename": "/dev/ng0n1", 00:52:45.905 "name": "xnvme_bdev" 00:52:45.905 }, 00:52:45.905 "method": "bdev_xnvme_create" 00:52:45.905 }, 00:52:45.905 { 00:52:45.905 "method": "bdev_wait_for_examine" 00:52:45.905 } 00:52:45.905 ] 00:52:45.905 } 00:52:45.905 ] 00:52:45.905 } 00:52:45.905 [2024-12-09 10:03:52.651057] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:45.905 [2024-12-09 10:03:52.651193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73663 ] 00:52:45.905 [2024-12-09 10:03:52.828738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:46.163 [2024-12-09 10:03:52.969898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:46.421 Running I/O for 5 seconds... 
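All four workloads in this test (randread, randwrite, unmap, and the write_zeroes run starting above) come from the single loop traced as xnvme.sh@15, with only the -w argument changing between iterations. A rough sketch of that loop, where $conf is a placeholder for the JSON config shown in the earlier sketch:

# One bdevperf run per I/O pattern, as in the xnvme_bdevperf loop traced
# in this log. $conf is assumed to hold the gen_conf JSON shown earlier.
SPDK=/home/vagrant/spdk_repo/spdk
conf=/tmp/xnvme_bdev.json

for io_pattern in randread randwrite unmap write_zeroes; do
  "$SPDK/build/examples/bdevperf" --json "$conf" \
    -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
done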
00:52:48.731 41506.00 IOPS, 162.13 MiB/s [2024-12-09T10:03:56.709Z] 40577.00 IOPS, 158.50 MiB/s [2024-12-09T10:03:57.644Z] 40293.33 IOPS, 157.40 MiB/s [2024-12-09T10:03:58.623Z] 40192.25 IOPS, 157.00 MiB/s [2024-12-09T10:03:58.623Z] 40281.60 IOPS, 157.35 MiB/s 00:52:51.579 Latency(us) 00:52:51.579 [2024-12-09T10:03:58.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:51.579 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:52:51.579 xnvme_bdev : 5.00 40260.90 157.27 0.00 0.00 1581.12 100.54 16205.27 00:52:51.579 [2024-12-09T10:03:58.623Z] =================================================================================================================== 00:52:51.579 [2024-12-09T10:03:58.623Z] Total : 40260.90 157.27 0.00 0.00 1581.12 100.54 16205.27 00:52:52.515 00:52:52.515 real 0m27.545s 00:52:52.515 user 0m18.136s 00:52:52.515 sys 0m7.126s 00:52:52.515 ************************************ 00:52:52.515 END TEST xnvme_bdevperf 00:52:52.515 ************************************ 00:52:52.515 10:03:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:52.515 10:03:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:52:52.515 10:03:59 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:52:52.515 10:03:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:52.515 10:03:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:52.515 10:03:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:52:52.515 ************************************ 00:52:52.515 START TEST xnvme_fio_plugin 00:52:52.515 ************************************ 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
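The fio_bdev/fio_plugin wrappers being traced here boil down to locating libasan via ldd on the spdk_bdev plugin and preloading both before invoking fio. The effective command, reconstructed from the LD_PRELOAD and fio lines below; feeding the JSON over fd 62 from a file is an assumption, since the harness uses a process substitution:

# Preload ASan plus SPDK's fio bdev ioengine, then run the same randread
# job as the harness. The 62< redirection standing in for gen_conf's
# process substitution is illustrative.
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev \
    62< /tmp/xnvme_bdev.json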
00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:52:52.515 10:03:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:52:52.515 { 00:52:52.515 "subsystems": [ 00:52:52.515 { 00:52:52.515 "subsystem": "bdev", 00:52:52.515 "config": [ 00:52:52.515 { 00:52:52.515 "params": { 00:52:52.515 "io_mechanism": "io_uring_cmd", 00:52:52.515 "conserve_cpu": true, 00:52:52.515 "filename": "/dev/ng0n1", 00:52:52.515 "name": "xnvme_bdev" 00:52:52.515 }, 00:52:52.515 "method": "bdev_xnvme_create" 00:52:52.515 }, 00:52:52.515 { 00:52:52.516 "method": "bdev_wait_for_examine" 00:52:52.516 } 00:52:52.516 ] 00:52:52.516 } 00:52:52.516 ] 00:52:52.516 } 00:52:52.775 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:52:52.775 fio-3.35 00:52:52.775 Starting 1 thread 00:52:59.339 00:52:59.339 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73776: Mon Dec 9 10:04:05 2024 00:52:59.339 read: IOPS=51.4k, BW=201MiB/s (210MB/s)(1004MiB/5001msec) 00:52:59.339 slat (nsec): min=2630, max=96286, avg=3790.70, stdev=2340.98 00:52:59.339 clat (usec): min=428, max=2438, avg=1093.96, stdev=138.48 00:52:59.339 lat (usec): min=436, max=2443, avg=1097.75, stdev=138.88 00:52:59.339 clat percentiles (usec): 00:52:59.339 | 1.00th=[ 857], 5.00th=[ 906], 10.00th=[ 938], 20.00th=[ 988], 00:52:59.339 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1106], 00:52:59.339 | 70.00th=[ 1139], 80.00th=[ 1188], 90.00th=[ 1254], 95.00th=[ 1319], 00:52:59.339 | 99.00th=[ 1582], 99.50th=[ 1713], 99.90th=[ 1926], 99.95th=[ 2040], 00:52:59.339 | 99.99th=[ 2343] 00:52:59.339 bw ( KiB/s): min=195584, max=219136, per=100.00%, avg=206620.44, stdev=7214.07, samples=9 00:52:59.339 iops : min=48896, max=54784, avg=51655.11, stdev=1803.52, samples=9 00:52:59.339 lat (usec) : 500=0.01%, 750=0.01%, 1000=24.47% 00:52:59.339 lat (msec) : 2=75.46%, 4=0.06% 00:52:59.339 cpu : usr=59.60%, sys=37.26%, ctx=24, majf=0, minf=762 00:52:59.339 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:52:59.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:52:59.339 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:52:59.339 issued rwts: total=256912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:52:59.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:52:59.339 00:52:59.339 Run status group 0 (all jobs): 00:52:59.339 READ: bw=201MiB/s (210MB/s), 201MiB/s-201MiB/s (210MB/s-210MB/s), io=1004MiB (1052MB), run=5001-5001msec 00:52:59.906 ----------------------------------------------------- 00:52:59.906 Suppressions used: 00:52:59.906 count bytes template 00:52:59.906 1 11 /usr/src/fio/parse.c 00:52:59.906 1 8 libtcmalloc_minimal.so 00:52:59.906 1 904 libcrypto.so 00:52:59.906 ----------------------------------------------------- 00:52:59.906 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:52:59.906 10:04:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:52:59.906 { 00:52:59.906 "subsystems": [ 00:52:59.906 { 00:52:59.906 "subsystem": "bdev", 00:52:59.906 "config": [ 00:52:59.906 { 00:52:59.906 "params": { 00:52:59.906 "io_mechanism": "io_uring_cmd", 00:52:59.906 "conserve_cpu": true, 00:52:59.906 "filename": "/dev/ng0n1", 00:52:59.906 "name": "xnvme_bdev" 00:52:59.906 }, 00:52:59.906 "method": "bdev_xnvme_create" 00:52:59.906 }, 00:52:59.906 { 00:52:59.906 "method": "bdev_wait_for_examine" 00:52:59.906 } 00:52:59.906 ] 00:52:59.906 } 00:52:59.906 ] 00:52:59.906 } 00:53:00.165 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:53:00.165 fio-3.35 00:53:00.165 Starting 1 thread 00:53:06.747 00:53:06.747 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73874: Mon Dec 9 10:04:12 2024 00:53:06.747 write: IOPS=43.1k, BW=168MiB/s (176MB/s)(841MiB/5001msec); 0 zone resets 00:53:06.747 slat (usec): min=2, max=105, avg= 5.09, stdev= 2.75 00:53:06.747 clat (usec): min=189, max=3505, avg=1285.00, stdev=181.20 00:53:06.747 lat (usec): min=194, max=3509, avg=1290.09, stdev=181.85 00:53:06.747 clat percentiles (usec): 00:53:06.747 | 1.00th=[ 971], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1139], 00:53:06.747 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1270], 60.00th=[ 1303], 00:53:06.747 | 70.00th=[ 1352], 80.00th=[ 1418], 90.00th=[ 1516], 95.00th=[ 1614], 00:53:06.747 | 99.00th=[ 1811], 99.50th=[ 1876], 99.90th=[ 2278], 99.95th=[ 2442], 00:53:06.747 | 99.99th=[ 3032] 00:53:06.747 bw ( KiB/s): min=162304, max=186880, per=100.00%, avg=172580.11, stdev=7688.87, samples=9 00:53:06.747 iops : min=40576, max=46720, avg=43144.89, stdev=1922.16, samples=9 00:53:06.747 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=2.15% 00:53:06.747 lat (msec) : 2=97.58%, 4=0.23% 00:53:06.747 cpu : usr=71.00%, sys=25.90%, ctx=9, majf=0, minf=763 00:53:06.747 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:53:06.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:53:06.747 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:53:06.747 issued rwts: total=0,215328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:53:06.747 latency : target=0, window=0, percentile=100.00%, depth=64 00:53:06.747 00:53:06.747 Run status group 0 (all jobs): 00:53:06.747 WRITE: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=841MiB (882MB), run=5001-5001msec 00:53:07.318 ----------------------------------------------------- 00:53:07.318 Suppressions used: 00:53:07.318 count bytes template 00:53:07.318 1 11 /usr/src/fio/parse.c 00:53:07.318 1 8 libtcmalloc_minimal.so 00:53:07.318 1 904 libcrypto.so 00:53:07.318 ----------------------------------------------------- 00:53:07.318 00:53:07.577 00:53:07.577 real 0m15.008s 00:53:07.577 user 0m10.466s 00:53:07.577 sys 0m3.946s 00:53:07.577 ************************************ 00:53:07.577 END TEST xnvme_fio_plugin 00:53:07.577 ************************************ 00:53:07.577 10:04:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:07.577 10:04:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:53:07.577 10:04:14 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73341 00:53:07.577 10:04:14 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73341 ']' 00:53:07.577 10:04:14 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73341 
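killprocess probes the pid with kill -0, which delivers no signal and only checks that the process exists and is signalable; the "No such process" on the next line simply means pid 73341 had already exited during the earlier cleanup. The idiom in isolation ($pid is a placeholder):

# kill -0 sends no signal; it only tests existence/permission for $pid.
if kill -0 "$pid" 2>/dev/null; then
  echo "process $pid is still running"
else
  echo "process $pid has already exited"
fi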
00:53:07.577 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73341) - No such process 00:53:07.577 Process with pid 73341 is not found 00:53:07.577 10:04:14 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73341 is not found' 00:53:07.577 00:53:07.577 real 3m53.795s 00:53:07.577 user 2m16.786s 00:53:07.577 sys 1m20.999s 00:53:07.577 ************************************ 00:53:07.577 END TEST nvme_xnvme 00:53:07.577 ************************************ 00:53:07.577 10:04:14 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:07.577 10:04:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:53:07.577 10:04:14 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:53:07.577 10:04:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:07.577 10:04:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:07.577 10:04:14 -- common/autotest_common.sh@10 -- # set +x 00:53:07.577 ************************************ 00:53:07.577 START TEST blockdev_xnvme 00:53:07.577 ************************************ 00:53:07.577 10:04:14 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:53:07.577 * Looking for test storage... 00:53:07.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:53:07.577 10:04:14 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:07.577 10:04:14 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:07.577 10:04:14 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:53:07.836 10:04:14 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:07.836 10:04:14 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:53:07.836 10:04:14 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:07.836 10:04:14 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:07.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:07.836 --rc genhtml_branch_coverage=1 00:53:07.836 --rc genhtml_function_coverage=1 00:53:07.836 --rc genhtml_legend=1 00:53:07.836 --rc geninfo_all_blocks=1 00:53:07.836 --rc geninfo_unexecuted_blocks=1 00:53:07.836 00:53:07.836 ' 00:53:07.836 10:04:14 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:07.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:07.836 --rc genhtml_branch_coverage=1 00:53:07.836 --rc genhtml_function_coverage=1 00:53:07.836 --rc genhtml_legend=1 00:53:07.836 --rc geninfo_all_blocks=1 00:53:07.836 --rc geninfo_unexecuted_blocks=1 00:53:07.836 00:53:07.836 ' 00:53:07.836 10:04:14 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:07.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:07.836 --rc genhtml_branch_coverage=1 00:53:07.836 --rc genhtml_function_coverage=1 00:53:07.836 --rc genhtml_legend=1 00:53:07.836 --rc geninfo_all_blocks=1 00:53:07.836 --rc geninfo_unexecuted_blocks=1 00:53:07.836 00:53:07.836 ' 00:53:07.836 10:04:14 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:07.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:07.836 --rc genhtml_branch_coverage=1 00:53:07.836 --rc genhtml_function_coverage=1 00:53:07.836 --rc genhtml_legend=1 00:53:07.836 --rc geninfo_all_blocks=1 00:53:07.836 --rc geninfo_unexecuted_blocks=1 00:53:07.836 00:53:07.836 ' 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:53:07.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74007 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74007 00:53:07.836 10:04:14 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74007 ']' 00:53:07.836 10:04:14 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:07.836 10:04:14 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:53:07.836 10:04:14 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:07.836 10:04:14 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:07.836 10:04:14 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:07.836 10:04:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:53:07.836 [2024-12-09 10:04:14.822927] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
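waitforlisten, traced just above, blocks until the freshly started spdk_tgt answers on its RPC socket. A simplified equivalent under the same defaults (/var/tmp/spdk.sock, SPDK's scripts/rpc.py); the polling interval and the rpc_get_methods probe are illustrative choices, not the harness's exact logic:

# Start spdk_tgt and poll its RPC socket until it responds, failing fast
# if the target dies during startup.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &
pid=$!

until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt died during startup" >&2; exit 1; }
  sleep 0.5
done
echo "spdk_tgt ($pid) is listening on /var/tmp/spdk.sock"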
00:53:07.836 [2024-12-09 10:04:14.823115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74007 ] 00:53:08.094 [2024-12-09 10:04:15.023749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:08.352 [2024-12-09 10:04:15.176498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:09.287 10:04:16 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:09.287 10:04:16 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:53:09.287 10:04:16 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:53:09.287 10:04:16 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:53:09.287 10:04:16 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:53:09.287 10:04:16 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:53:09.287 10:04:16 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:53:09.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:53:10.114 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:53:10.114 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:53:10.114 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:53:10.114 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:53:10.114 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:53:10.114 10:04:17 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:53:10.114 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:53:10.115 10:04:17 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:53:10.115 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:53:10.374 nvme0n1 00:53:10.374 nvme0n2 00:53:10.374 nvme0n3 00:53:10.374 nvme1n1 00:53:10.374 nvme2n1 00:53:10.374 nvme3n1 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:53:10.374 
10:04:17 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:53:10.374 10:04:17 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:53:10.374 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:53:10.375 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "42b35d03-4815-4e7b-81e9-944fce546df7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "42b35d03-4815-4e7b-81e9-944fce546df7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "35919adc-439d-4b46-8d81-65138e1fe803"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "35919adc-439d-4b46-8d81-65138e1fe803",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "9de99156-1264-4759-b80e-d110ffa3b02f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9de99156-1264-4759-b80e-d110ffa3b02f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3169bf45-5de8-400e-ae2a-a3a568ad1e5b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3169bf45-5de8-400e-ae2a-a3a568ad1e5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6f407784-87ef-461c-8b74-10159c02db44"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6f407784-87ef-461c-8b74-10159c02db44",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "504fccf2-0177-4b9c-ac92-97ba1dac1198"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "504fccf2-0177-4b9c-ac92-97ba1dac1198",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:53:10.375 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:53:10.375 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:53:10.375 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:53:10.375 10:04:17 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74007 00:53:10.375 10:04:17 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74007 ']' 00:53:10.375 10:04:17 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74007 00:53:10.375 10:04:17 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:53:10.375 10:04:17 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:10.375 10:04:17 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 74007 00:53:10.634 killing process with pid 74007 00:53:10.634 10:04:17 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:10.634 10:04:17 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:10.634 10:04:17 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74007' 00:53:10.634 10:04:17 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74007 00:53:10.634 10:04:17 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74007 00:53:13.180 10:04:19 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:53:13.180 10:04:19 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:53:13.180 10:04:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:53:13.180 10:04:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:13.180 10:04:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:53:13.180 ************************************ 00:53:13.180 START TEST bdev_hello_world 00:53:13.180 ************************************ 00:53:13.180 10:04:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:53:13.180 [2024-12-09 10:04:19.898047] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:53:13.181 [2024-12-09 10:04:19.898581] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74297 ] 00:53:13.181 [2024-12-09 10:04:20.085783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:13.181 [2024-12-09 10:04:20.217215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:13.747 [2024-12-09 10:04:20.690715] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:53:13.747 [2024-12-09 10:04:20.690778] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:53:13.747 [2024-12-09 10:04:20.690803] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:53:13.747 [2024-12-09 10:04:20.693192] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:53:13.747 [2024-12-09 10:04:20.693528] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:53:13.747 [2024-12-09 10:04:20.693735] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:53:13.747 [2024-12-09 10:04:20.693945] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
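At this point hello_bdev has completed its round-trip against nvme0n1: open the bdev named by -b, write "Hello World!", read it back, and print the string on success. The invocation is runnable standalone with the same config file used by this run:

# Same command as run_test bdev_hello_world above: build the bdev layer
# from bdev.json, then exercise nvme0n1 with a write/read round-trip.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/hello_bdev" --json "$SPDK/test/bdev/bdev.json" -b nvme0n1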
00:53:13.747 00:53:13.747 [2024-12-09 10:04:20.694006] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:53:15.123 00:53:15.123 real 0m2.001s 00:53:15.123 user 0m1.584s 00:53:15.123 sys 0m0.298s 00:53:15.123 10:04:21 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:15.123 10:04:21 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:53:15.123 ************************************ 00:53:15.123 END TEST bdev_hello_world 00:53:15.123 ************************************ 00:53:15.123 10:04:21 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:53:15.123 10:04:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:15.123 10:04:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:15.123 10:04:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:53:15.123 ************************************ 00:53:15.123 START TEST bdev_bounds 00:53:15.123 ************************************ 00:53:15.123 10:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:53:15.123 10:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74339 00:53:15.123 10:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:53:15.123 10:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:53:15.123 Process bdevio pid: 74339 00:53:15.123 10:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74339' 00:53:15.123 10:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74339 00:53:15.123 10:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74339 ']' 00:53:15.123 10:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:15.123 10:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:15.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:15.123 10:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:15.123 10:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:15.123 10:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:53:15.123 [2024-12-09 10:04:21.902053] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
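bdev_bounds launches bdevio with -w (start the app and wait for an RPC trigger) plus the harness's -s 0, then kicks off every suite over RPC with tests.py perform_tests; the per-bdev output follows. Reconstructed as a standalone two-step, where the backgrounding and the final wait are assumptions about orchestration:

# Step 1: bdevio starts and waits for an RPC trigger; step 2: tests.py
# fires perform_tests, after which bdevio runs every suite and exits.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" '' &
bdevio_pid=$!

# wait for /var/tmp/spdk.sock as in the earlier waitforlisten sketch, then:
"$SPDK/test/bdev/bdevio/tests.py" perform_tests
wait "$bdevio_pid"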
00:53:15.123 [2024-12-09 10:04:21.902228] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74339 ] 00:53:15.123 [2024-12-09 10:04:22.094464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:53:15.382 [2024-12-09 10:04:22.247772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:15.382 [2024-12-09 10:04:22.247867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:15.382 [2024-12-09 10:04:22.247883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:15.949 10:04:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:15.949 10:04:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:53:15.949 10:04:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:53:16.208 I/O targets: 00:53:16.208 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:53:16.208 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:53:16.208 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:53:16.208 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:53:16.208 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:53:16.208 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:53:16.208 00:53:16.208 00:53:16.208 CUnit - A unit testing framework for C - Version 2.1-3 00:53:16.208 http://cunit.sourceforge.net/ 00:53:16.208 00:53:16.208 00:53:16.208 Suite: bdevio tests on: nvme3n1 00:53:16.208 Test: blockdev write read block ...passed 00:53:16.208 Test: blockdev write zeroes read block ...passed 00:53:16.208 Test: blockdev write zeroes read no split ...passed 00:53:16.208 Test: blockdev write zeroes read split ...passed 00:53:16.208 Test: blockdev write zeroes read split partial ...passed 00:53:16.208 Test: blockdev reset ...passed 00:53:16.208 Test: blockdev write read 8 blocks ...passed 00:53:16.208 Test: blockdev write read size > 128k ...passed 00:53:16.208 Test: blockdev write read invalid size ...passed 00:53:16.208 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:53:16.208 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:53:16.208 Test: blockdev write read max offset ...passed 00:53:16.208 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:53:16.208 Test: blockdev writev readv 8 blocks ...passed 00:53:16.208 Test: blockdev writev readv 30 x 1block ...passed 00:53:16.208 Test: blockdev writev readv block ...passed 00:53:16.208 Test: blockdev writev readv size > 128k ...passed 00:53:16.208 Test: blockdev writev readv size > 128k in two iovs ...passed 00:53:16.208 Test: blockdev comparev and writev ...passed 00:53:16.208 Test: blockdev nvme passthru rw ...passed 00:53:16.208 Test: blockdev nvme passthru vendor specific ...passed 00:53:16.208 Test: blockdev nvme admin passthru ...passed 00:53:16.208 Test: blockdev copy ...passed 00:53:16.208 Suite: bdevio tests on: nvme2n1 00:53:16.208 Test: blockdev write read block ...passed 00:53:16.208 Test: blockdev write zeroes read block ...passed 00:53:16.208 Test: blockdev write zeroes read no split ...passed 00:53:16.208 Test: blockdev write zeroes read split ...passed 00:53:16.208 Test: blockdev write zeroes read split partial ...passed 00:53:16.208 Test: blockdev reset ...passed 
00:53:16.208 Test: blockdev write read 8 blocks ...passed 00:53:16.208 Test: blockdev write read size > 128k ...passed 00:53:16.208 Test: blockdev write read invalid size ...passed 00:53:16.208 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:53:16.208 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:53:16.208 Test: blockdev write read max offset ...passed 00:53:16.208 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:53:16.208 Test: blockdev writev readv 8 blocks ...passed 00:53:16.208 Test: blockdev writev readv 30 x 1block ...passed 00:53:16.208 Test: blockdev writev readv block ...passed 00:53:16.208 Test: blockdev writev readv size > 128k ...passed 00:53:16.208 Test: blockdev writev readv size > 128k in two iovs ...passed 00:53:16.208 Test: blockdev comparev and writev ...passed 00:53:16.208 Test: blockdev nvme passthru rw ...passed 00:53:16.208 Test: blockdev nvme passthru vendor specific ...passed 00:53:16.208 Test: blockdev nvme admin passthru ...passed 00:53:16.208 Test: blockdev copy ...passed 00:53:16.208 Suite: bdevio tests on: nvme1n1 00:53:16.209 Test: blockdev write read block ...passed 00:53:16.209 Test: blockdev write zeroes read block ...passed 00:53:16.209 Test: blockdev write zeroes read no split ...passed 00:53:16.467 Test: blockdev write zeroes read split ...passed 00:53:16.467 Test: blockdev write zeroes read split partial ...passed 00:53:16.467 Test: blockdev reset ...passed 00:53:16.467 Test: blockdev write read 8 blocks ...passed 00:53:16.467 Test: blockdev write read size > 128k ...passed 00:53:16.467 Test: blockdev write read invalid size ...passed 00:53:16.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:53:16.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:53:16.467 Test: blockdev write read max offset ...passed 00:53:16.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:53:16.467 Test: blockdev writev readv 8 blocks ...passed 00:53:16.467 Test: blockdev writev readv 30 x 1block ...passed 00:53:16.467 Test: blockdev writev readv block ...passed 00:53:16.467 Test: blockdev writev readv size > 128k ...passed 00:53:16.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:53:16.467 Test: blockdev comparev and writev ...passed 00:53:16.467 Test: blockdev nvme passthru rw ...passed 00:53:16.467 Test: blockdev nvme passthru vendor specific ...passed 00:53:16.467 Test: blockdev nvme admin passthru ...passed 00:53:16.467 Test: blockdev copy ...passed 00:53:16.467 Suite: bdevio tests on: nvme0n3 00:53:16.467 Test: blockdev write read block ...passed 00:53:16.467 Test: blockdev write zeroes read block ...passed 00:53:16.467 Test: blockdev write zeroes read no split ...passed 00:53:16.467 Test: blockdev write zeroes read split ...passed 00:53:16.467 Test: blockdev write zeroes read split partial ...passed 00:53:16.467 Test: blockdev reset ...passed 00:53:16.467 Test: blockdev write read 8 blocks ...passed 00:53:16.467 Test: blockdev write read size > 128k ...passed 00:53:16.467 Test: blockdev write read invalid size ...passed 00:53:16.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:53:16.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:53:16.467 Test: blockdev write read max offset ...passed 00:53:16.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:53:16.467 Test: blockdev writev readv 8 blocks 
...passed 00:53:16.467 Test: blockdev writev readv 30 x 1block ...passed 00:53:16.467 Test: blockdev writev readv block ...passed 00:53:16.467 Test: blockdev writev readv size > 128k ...passed 00:53:16.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:53:16.467 Test: blockdev comparev and writev ...passed 00:53:16.467 Test: blockdev nvme passthru rw ...passed 00:53:16.467 Test: blockdev nvme passthru vendor specific ...passed 00:53:16.467 Test: blockdev nvme admin passthru ...passed 00:53:16.467 Test: blockdev copy ...passed 00:53:16.467 Suite: bdevio tests on: nvme0n2 00:53:16.467 Test: blockdev write read block ...passed 00:53:16.467 Test: blockdev write zeroes read block ...passed 00:53:16.467 Test: blockdev write zeroes read no split ...passed 00:53:16.467 Test: blockdev write zeroes read split ...passed 00:53:16.467 Test: blockdev write zeroes read split partial ...passed 00:53:16.467 Test: blockdev reset ...passed 00:53:16.467 Test: blockdev write read 8 blocks ...passed 00:53:16.468 Test: blockdev write read size > 128k ...passed 00:53:16.468 Test: blockdev write read invalid size ...passed 00:53:16.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:53:16.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:53:16.468 Test: blockdev write read max offset ...passed 00:53:16.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:53:16.468 Test: blockdev writev readv 8 blocks ...passed 00:53:16.468 Test: blockdev writev readv 30 x 1block ...passed 00:53:16.468 Test: blockdev writev readv block ...passed 00:53:16.468 Test: blockdev writev readv size > 128k ...passed 00:53:16.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:53:16.468 Test: blockdev comparev and writev ...passed 00:53:16.468 Test: blockdev nvme passthru rw ...passed 00:53:16.468 Test: blockdev nvme passthru vendor specific ...passed 00:53:16.468 Test: blockdev nvme admin passthru ...passed 00:53:16.468 Test: blockdev copy ...passed 00:53:16.468 Suite: bdevio tests on: nvme0n1 00:53:16.468 Test: blockdev write read block ...passed 00:53:16.468 Test: blockdev write zeroes read block ...passed 00:53:16.468 Test: blockdev write zeroes read no split ...passed 00:53:16.726 Test: blockdev write zeroes read split ...passed 00:53:16.726 Test: blockdev write zeroes read split partial ...passed 00:53:16.726 Test: blockdev reset ...passed 00:53:16.726 Test: blockdev write read 8 blocks ...passed 00:53:16.726 Test: blockdev write read size > 128k ...passed 00:53:16.726 Test: blockdev write read invalid size ...passed 00:53:16.726 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:53:16.726 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:53:16.726 Test: blockdev write read max offset ...passed 00:53:16.726 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:53:16.726 Test: blockdev writev readv 8 blocks ...passed 00:53:16.726 Test: blockdev writev readv 30 x 1block ...passed 00:53:16.726 Test: blockdev writev readv block ...passed 00:53:16.726 Test: blockdev writev readv size > 128k ...passed 00:53:16.726 Test: blockdev writev readv size > 128k in two iovs ...passed 00:53:16.726 Test: blockdev comparev and writev ...passed 00:53:16.726 Test: blockdev nvme passthru rw ...passed 00:53:16.726 Test: blockdev nvme passthru vendor specific ...passed 00:53:16.726 Test: blockdev nvme admin passthru ...passed 00:53:16.726 Test: blockdev copy ...passed 
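A quick cross-check before the summary below: each of the six suites above runs the same 23 blockdev tests, which is where the aggregate figures come from.

    # Sanity arithmetic for the Run Summary that follows
    echo $(( 6 * 23 ))    # suites x tests per suite  = 138 tests
    echo $(( 780 / 6 ))   # asserts per suite         = 130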
00:53:16.726 00:53:16.726 Run Summary: Type Total Ran Passed Failed Inactive 00:53:16.726 suites 6 6 n/a 0 0 00:53:16.726 tests 138 138 138 0 0 00:53:16.726 asserts 780 780 780 0 n/a 00:53:16.726 00:53:16.726 Elapsed time = 1.291 seconds 00:53:16.726 0 00:53:16.726 10:04:23 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74339 00:53:16.726 10:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74339 ']' 00:53:16.726 10:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74339 00:53:16.726 10:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:53:16.726 10:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:16.726 10:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74339 00:53:16.726 10:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:16.726 10:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:16.726 10:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74339' 00:53:16.726 killing process with pid 74339 00:53:16.726 10:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74339 00:53:16.726 10:04:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74339 00:53:18.102 10:04:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:53:18.102 00:53:18.102 real 0m3.031s 00:53:18.102 user 0m7.724s 00:53:18.102 sys 0m0.453s 00:53:18.102 10:04:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:18.102 10:04:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:53:18.102 ************************************ 00:53:18.102 END TEST bdev_bounds 00:53:18.102 ************************************ 00:53:18.102 10:04:24 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:53:18.102 10:04:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:53:18.102 10:04:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:18.102 10:04:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:53:18.102 ************************************ 00:53:18.102 START TEST bdev_nbd 00:53:18.102 ************************************ 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
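The bdev_nbd phase starting here maps each of the six bdevs to a kernel NBD device over the dedicated /var/tmp/spdk-nbd.sock RPC socket. A condensed sketch of the start/inspect/stop cycle driven in the trace (socket path, bdev names, and RPC method names all from the log; error handling omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdevs=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)

    for b in "${bdevs[@]}"; do
        # The first pass in the log omits the nbd index, letting the target
        # pick the next free /dev/nbdX; the returned device is then verified.
        "$rpc" -s "$sock" nbd_start_disk "$b"
    done

    "$rpc" -s "$sock" nbd_get_disks            # JSON array of {nbd_device, bdev_name} pairs
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0  # teardown, one device at a time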
00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:53:18.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74403 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74403 /var/tmp/spdk-nbd.sock 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74403 ']' 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:18.102 10:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:53:18.102 [2024-12-09 10:04:25.004527] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
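With bdev_svc about to come up on the NBD socket, every nbd_start_disk call that follows is validated by the waitfornbd helper visible in the xtrace. Reduced to its core it looks roughly like this (the retry limit of 20, the direct-I/O dd, and the stat size check are straight from the trace; the sleep interval is an assumption, and the real helper also retries the dd in a second loop):

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # polling interval assumed, not shown in the log
        done

        # One 4 KiB direct read proves the kernel device actually serves data
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
           bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]   # the "'[' 4096 '!=' 0 ']'" step in the trace
    }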
00:53:18.102 [2024-12-09 10:04:25.004719] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:18.361 [2024-12-09 10:04:25.197996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:18.361 [2024-12-09 10:04:25.339216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:53:19.297 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:53:19.556 
1+0 records in 00:53:19.556 1+0 records out 00:53:19.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473406 s, 8.7 MB/s 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:53:19.556 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:53:19.814 1+0 records in 00:53:19.814 1+0 records out 00:53:19.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452532 s, 9.1 MB/s 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:53:19.814 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:53:20.073 10:04:26 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:53:20.073 1+0 records in 00:53:20.073 1+0 records out 00:53:20.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654217 s, 6.3 MB/s 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:53:20.073 10:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:53:20.332 1+0 records in 00:53:20.332 1+0 records out 00:53:20.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697108 s, 5.9 MB/s 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:53:20.332 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:53:20.590 1+0 records in 00:53:20.590 1+0 records out 00:53:20.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000943595 s, 4.3 MB/s 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:53:20.590 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:20.848 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:53:20.848 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:53:20.848 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:53:20.848 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:53:20.848 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:53:21.106 10:04:27 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:53:21.106 1+0 records in 00:53:21.106 1+0 records out 00:53:21.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823156 s, 5.0 MB/s 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:53:21.106 10:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:53:21.364 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:53:21.364 { 00:53:21.364 "nbd_device": "/dev/nbd0", 00:53:21.364 "bdev_name": "nvme0n1" 00:53:21.364 }, 00:53:21.364 { 00:53:21.364 "nbd_device": "/dev/nbd1", 00:53:21.364 "bdev_name": "nvme0n2" 00:53:21.364 }, 00:53:21.364 { 00:53:21.364 "nbd_device": "/dev/nbd2", 00:53:21.364 "bdev_name": "nvme0n3" 00:53:21.364 }, 00:53:21.364 { 00:53:21.364 "nbd_device": "/dev/nbd3", 00:53:21.364 "bdev_name": "nvme1n1" 00:53:21.364 }, 00:53:21.364 { 00:53:21.364 "nbd_device": "/dev/nbd4", 00:53:21.364 "bdev_name": "nvme2n1" 00:53:21.364 }, 00:53:21.364 { 00:53:21.364 "nbd_device": "/dev/nbd5", 00:53:21.364 "bdev_name": "nvme3n1" 00:53:21.364 } 00:53:21.364 ]' 00:53:21.364 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:53:21.364 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:53:21.364 { 00:53:21.364 "nbd_device": "/dev/nbd0", 00:53:21.364 "bdev_name": "nvme0n1" 00:53:21.364 }, 00:53:21.364 { 00:53:21.364 "nbd_device": "/dev/nbd1", 00:53:21.364 "bdev_name": "nvme0n2" 00:53:21.364 }, 00:53:21.364 { 00:53:21.364 "nbd_device": "/dev/nbd2", 00:53:21.364 "bdev_name": "nvme0n3" 00:53:21.364 }, 00:53:21.364 { 00:53:21.364 "nbd_device": "/dev/nbd3", 00:53:21.364 "bdev_name": "nvme1n1" 00:53:21.364 }, 00:53:21.364 { 00:53:21.364 "nbd_device": "/dev/nbd4", 00:53:21.364 "bdev_name": "nvme2n1" 00:53:21.364 }, 00:53:21.364 { 00:53:21.364 "nbd_device": "/dev/nbd5", 00:53:21.364 "bdev_name": "nvme3n1" 00:53:21.364 } 00:53:21.364 ]' 00:53:21.364 10:04:28 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:53:21.364 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:53:21.364 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:53:21.364 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:53:21.364 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:53:21.364 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:53:21.364 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:21.364 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:53:21.622 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:53:21.622 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:53:21.622 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:53:21.622 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:21.622 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:53:21.622 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:53:21.622 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:21.622 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:21.622 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:21.622 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:53:21.880 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:53:22.139 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:53:22.139 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:53:22.139 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:22.139 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:53:22.139 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:53:22.139 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:22.139 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:22.139 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:22.139 10:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:53:22.398 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:53:22.398 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:53:22.398 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:53:22.398 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:22.398 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:53:22.398 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:53:22.399 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:22.399 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:22.399 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:22.399 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:53:22.657 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:53:22.657 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:53:22.657 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:53:22.657 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:22.657 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:53:22.657 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:53:22.657 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:22.657 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:22.657 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:22.657 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:53:22.915 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:53:22.915 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:53:22.915 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:53:22.915 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:22.915 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:53:22.915 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:53:22.915 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:22.915 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:22.915 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:22.915 10:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:53:23.172 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:53:23.172 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:53:23.172 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:53:23.172 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:23.172 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:53:23.172 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:53:23.172 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:23.172 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:23.172 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:53:23.172 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:53:23.172 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:53:23.741 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:53:24.000 /dev/nbd0 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:53:24.000 1+0 records in 00:53:24.000 1+0 records out 00:53:24.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698113 s, 5.9 MB/s 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:53:24.000 10:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:53:24.258 /dev/nbd1 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:53:24.258 1+0 records in 00:53:24.258 1+0 records out 00:53:24.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612106 s, 6.7 MB/s 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:53:24.258 10:04:31 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:53:24.258 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:53:24.517 /dev/nbd10 00:53:24.777 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:53:24.777 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:53:24.777 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:53:24.777 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:53:24.777 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:53:24.777 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:53:24.778 1+0 records in 00:53:24.778 1+0 records out 00:53:24.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514428 s, 8.0 MB/s 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:53:24.778 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:53:25.037 /dev/nbd11 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:53:25.037 10:04:31 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:53:25.037 1+0 records in 00:53:25.037 1+0 records out 00:53:25.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000841139 s, 4.9 MB/s 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:53:25.037 10:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:53:25.295 /dev/nbd12 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:53:25.295 1+0 records in 00:53:25.295 1+0 records out 00:53:25.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734177 s, 5.6 MB/s 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:53:25.295 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:53:25.554 /dev/nbd13 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:53:25.554 1+0 records in 00:53:25.554 1+0 records out 00:53:25.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755526 s, 5.4 MB/s 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:53:25.554 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:53:26.121 { 00:53:26.121 "nbd_device": "/dev/nbd0", 00:53:26.121 "bdev_name": "nvme0n1" 00:53:26.121 }, 00:53:26.121 { 00:53:26.121 "nbd_device": "/dev/nbd1", 00:53:26.121 "bdev_name": "nvme0n2" 00:53:26.121 }, 00:53:26.121 { 00:53:26.121 "nbd_device": "/dev/nbd10", 00:53:26.121 "bdev_name": "nvme0n3" 00:53:26.121 }, 00:53:26.121 { 00:53:26.121 "nbd_device": "/dev/nbd11", 00:53:26.121 "bdev_name": "nvme1n1" 00:53:26.121 }, 00:53:26.121 { 00:53:26.121 "nbd_device": "/dev/nbd12", 00:53:26.121 "bdev_name": "nvme2n1" 00:53:26.121 }, 00:53:26.121 { 00:53:26.121 "nbd_device": "/dev/nbd13", 00:53:26.121 "bdev_name": "nvme3n1" 00:53:26.121 } 00:53:26.121 ]' 00:53:26.121 10:04:32 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:53:26.121 { 00:53:26.121 "nbd_device": "/dev/nbd0", 00:53:26.121 "bdev_name": "nvme0n1" 00:53:26.121 }, 00:53:26.121 { 00:53:26.121 "nbd_device": "/dev/nbd1", 00:53:26.121 "bdev_name": "nvme0n2" 00:53:26.121 }, 00:53:26.121 { 00:53:26.121 "nbd_device": "/dev/nbd10", 00:53:26.121 "bdev_name": "nvme0n3" 00:53:26.121 }, 00:53:26.121 { 00:53:26.121 "nbd_device": "/dev/nbd11", 00:53:26.121 "bdev_name": "nvme1n1" 00:53:26.121 }, 00:53:26.121 { 00:53:26.121 "nbd_device": "/dev/nbd12", 00:53:26.121 "bdev_name": "nvme2n1" 00:53:26.121 }, 00:53:26.121 { 00:53:26.121 "nbd_device": "/dev/nbd13", 00:53:26.121 "bdev_name": "nvme3n1" 00:53:26.121 } 00:53:26.121 ]' 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:53:26.121 /dev/nbd1 00:53:26.121 /dev/nbd10 00:53:26.121 /dev/nbd11 00:53:26.121 /dev/nbd12 00:53:26.121 /dev/nbd13' 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:53:26.121 /dev/nbd1 00:53:26.121 /dev/nbd10 00:53:26.121 /dev/nbd11 00:53:26.121 /dev/nbd12 00:53:26.121 /dev/nbd13' 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:53:26.121 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:53:26.122 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:53:26.122 256+0 records in 00:53:26.122 256+0 records out 00:53:26.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00996237 s, 105 MB/s 00:53:26.122 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:53:26.122 10:04:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:53:26.122 256+0 records in 00:53:26.122 256+0 records out 00:53:26.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15353 s, 6.8 MB/s 00:53:26.122 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:53:26.122 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:53:26.380 256+0 records in 00:53:26.380 256+0 records out 00:53:26.380 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.128044 s, 8.2 MB/s 00:53:26.380 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:53:26.380 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:53:26.380 256+0 records in 00:53:26.380 256+0 records out 00:53:26.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134967 s, 7.8 MB/s 00:53:26.380 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:53:26.380 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:53:26.639 256+0 records in 00:53:26.639 256+0 records out 00:53:26.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154795 s, 6.8 MB/s 00:53:26.639 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:53:26.639 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:53:26.898 256+0 records in 00:53:26.898 256+0 records out 00:53:26.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157573 s, 6.7 MB/s 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:53:26.898 256+0 records in 00:53:26.898 256+0 records out 00:53:26.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15465 s, 6.8 MB/s 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:26.898 10:04:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:53:27.465 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:53:27.465 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:53:27.465 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:53:27.465 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:27.465 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:53:27.465 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:53:27.465 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:27.465 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:27.465 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:27.465 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:53:27.723 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:53:27.723 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:53:27.723 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:53:27.723 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:27.723 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:53:27.723 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:53:27.723 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:27.723 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:27.723 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:27.723 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:53:27.982 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:53:27.982 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:53:27.982 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:53:27.982 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:27.982 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:53:27.982 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:53:27.982 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:27.982 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:27.982 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:27.982 10:04:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:53:28.239 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:53:28.239 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:53:28.239 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:53:28.239 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:28.239 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:53:28.239 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:53:28.239 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:28.239 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:28.239 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:28.239 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:53:28.497 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:53:28.497 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:53:28.497 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:53:28.497 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:28.497 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:53:28.497 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:53:28.497 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:28.497 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:28.497 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:28.497 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:53:28.755 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:53:28.755 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:53:28.755 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:53:28.755 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:28.755 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:53:28.755 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:53:28.755 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:28.755 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:28.755 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:53:28.755 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:53:28.755 10:04:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:53:29.322 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:53:29.580 malloc_lvol_verify 00:53:29.581 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:53:29.839 4204dff8-5b6f-4436-9ae9-3909edf07714 00:53:29.839 10:04:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:53:30.097 e41ca601-71a2-4635-be09-e6fd7dc7fb28 00:53:30.097 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:53:30.356 /dev/nbd0 00:53:30.356 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:53:30.356 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:53:30.356 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:53:30.356 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:53:30.356 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:53:30.356 mke2fs 1.47.0 (5-Feb-2023) 00:53:30.356 
Discarding device blocks: 0/4096 done 00:53:30.356 Creating filesystem with 4096 1k blocks and 1024 inodes 00:53:30.356 00:53:30.356 Allocating group tables: 0/1 done 00:53:30.356 Writing inode tables: 0/1 done 00:53:30.356 Creating journal (1024 blocks): done 00:53:30.356 Writing superblocks and filesystem accounting information: 0/1 done 00:53:30.356 00:53:30.356 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:53:30.356 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:53:30.356 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:53:30.356 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:53:30.356 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:53:30.356 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:53:30.356 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:53:30.614 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:53:30.872 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:53:30.872 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:53:30.872 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:53:30.872 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:53:30.872 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:53:30.872 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:53:30.872 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:53:30.872 10:04:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74403 00:53:30.872 10:04:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74403 ']' 00:53:30.872 10:04:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74403 00:53:30.873 10:04:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:53:30.873 10:04:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:30.873 10:04:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74403 00:53:30.873 10:04:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:30.873 10:04:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:30.873 killing process with pid 74403 00:53:30.873 10:04:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74403' 00:53:30.873 10:04:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74403 00:53:30.873 10:04:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74403 00:53:32.249 10:04:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:53:32.249 ************************************ 00:53:32.249 END TEST bdev_nbd 00:53:32.249 ************************************ 00:53:32.249 00:53:32.249 real 0m13.998s 00:53:32.249 user 0m20.030s 00:53:32.249 sys 0m4.559s 00:53:32.249 10:04:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:32.249 10:04:38 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:53:32.249 10:04:38 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:53:32.249 10:04:38 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:53:32.249 10:04:38 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:53:32.249 10:04:38 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:53:32.249 10:04:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:32.249 10:04:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:32.249 10:04:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:53:32.249 ************************************ 00:53:32.249 START TEST bdev_fio 00:53:32.249 ************************************ 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:53:32.249 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:53:32.249 
10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:32.249 10:04:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:53:32.249 ************************************ 00:53:32.249 START TEST bdev_fio_rw_verify 00:53:32.249 ************************************ 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:53:32.249 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:53:32.250 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:53:32.250 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:53:32.250 10:04:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:53:32.563 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:53:32.563 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:53:32.563 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:53:32.563 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:53:32.563 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:53:32.563 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:53:32.563 fio-3.35 00:53:32.563 Starting 6 threads 00:53:44.780 00:53:44.780 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74841: Mon Dec 9 10:04:50 2024 00:53:44.780 read: IOPS=28.1k, BW=110MiB/s (115MB/s)(1099MiB/10001msec) 00:53:44.780 slat (usec): min=3, max=575, avg= 8.30, stdev= 5.63 00:53:44.780 clat (usec): min=109, max=477686, avg=655.71, 
stdev=2558.23 00:53:44.780 lat (usec): min=115, max=477696, avg=664.01, stdev=2558.34 00:53:44.780 clat percentiles (usec): 00:53:44.780 | 50.000th=[ 644], 99.000th=[ 1385], 99.900th=[ 3163], 00:53:44.780 | 99.990th=[ 4228], 99.999th=[476054] 00:53:44.780 write: IOPS=28.5k, BW=111MiB/s (117MB/s)(1113MiB/10001msec); 0 zone resets 00:53:44.780 slat (usec): min=7, max=2297, avg=30.10, stdev=31.29 00:53:44.780 clat (usec): min=87, max=5719, avg=727.96, stdev=283.05 00:53:44.781 lat (usec): min=121, max=5744, avg=758.06, stdev=286.23 00:53:44.781 clat percentiles (usec): 00:53:44.781 | 50.000th=[ 725], 99.000th=[ 1483], 99.900th=[ 2769], 99.990th=[ 5538], 00:53:44.781 | 99.999th=[ 5669] 00:53:44.781 bw ( KiB/s): min=84808, max=145324, per=99.73%, avg=113626.79, stdev=2798.35, samples=114 00:53:44.781 iops : min=21202, max=36331, avg=28406.32, stdev=699.57, samples=114 00:53:44.781 lat (usec) : 100=0.01%, 250=3.19%, 500=22.73%, 750=34.58%, 1000=30.13% 00:53:44.781 lat (msec) : 2=9.07%, 4=0.28%, 10=0.02%, 500=0.01% 00:53:44.781 cpu : usr=58.27%, sys=27.55%, ctx=7276, majf=0, minf=24112 00:53:44.781 IO depths : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:53:44.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:53:44.781 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:53:44.781 issued rwts: total=281431,284854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:53:44.781 latency : target=0, window=0, percentile=100.00%, depth=8 00:53:44.781 00:53:44.781 Run status group 0 (all jobs): 00:53:44.781 READ: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1099MiB (1153MB), run=10001-10001msec 00:53:44.781 WRITE: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=1113MiB (1167MB), run=10001-10001msec 00:53:44.781 ----------------------------------------------------- 00:53:44.781 Suppressions used: 00:53:44.781 count bytes template 00:53:44.781 6 48 /usr/src/fio/parse.c 00:53:44.781 3237 310752 /usr/src/fio/iolog.c 00:53:44.781 1 8 libtcmalloc_minimal.so 00:53:44.781 1 904 libcrypto.so 00:53:44.781 ----------------------------------------------------- 00:53:44.781 00:53:44.781 00:53:44.781 real 0m12.677s 00:53:44.781 user 0m37.079s 00:53:44.781 sys 0m16.972s 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:44.781 ************************************ 00:53:44.781 END TEST bdev_fio_rw_verify 00:53:44.781 ************************************ 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "42b35d03-4815-4e7b-81e9-944fce546df7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "42b35d03-4815-4e7b-81e9-944fce546df7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "35919adc-439d-4b46-8d81-65138e1fe803"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "35919adc-439d-4b46-8d81-65138e1fe803",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "9de99156-1264-4759-b80e-d110ffa3b02f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9de99156-1264-4759-b80e-d110ffa3b02f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3169bf45-5de8-400e-ae2a-a3a568ad1e5b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3169bf45-5de8-400e-ae2a-a3a568ad1e5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6f407784-87ef-461c-8b74-10159c02db44"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6f407784-87ef-461c-8b74-10159c02db44",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "504fccf2-0177-4b9c-ac92-97ba1dac1198"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "504fccf2-0177-4b9c-ac92-97ba1dac1198",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:53:44.781 /home/vagrant/spdk_repo/spdk 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:53:44.781
00:53:44.781
00:53:44.781 real 0m12.867s
00:53:44.781 user 0m37.186s
00:53:44.781 sys 0m17.057s
00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:53:44.781 10:04:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:53:44.781 ************************************
00:53:44.781 END TEST bdev_fio
00:53:44.781 ************************************
00:53:45.040 10:04:51 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:53:45.040 10:04:51 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:53:45.040 10:04:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:53:45.040 10:04:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:53:45.040 10:04:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:53:45.040 ************************************
00:53:45.040 START TEST bdev_verify
00:53:45.040 ************************************
00:53:45.040 10:04:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:53:45.040 [2024-12-09 10:04:51.966410] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:53:45.040 [2024-12-09 10:04:51.966607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75019 ]
00:53:45.299 [2024-12-09 10:04:52.158757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:53:45.299 [2024-12-09 10:04:52.318059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:53:45.299 [2024-12-09 10:04:52.318059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:53:45.866 Running I/O for 5 seconds...
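Note: bdev_verify runs SPDK's bdevperf example (build/examples/bdevperf) against the same bdev.json. In the invocation traced above, -q 128 is the queue depth per target, -o 4096 the I/O size in bytes, -w verify a workload that writes data and reads it back for comparison, -t 5 the run time in seconds, and -m 0x3 a two-core mask (matching the two reactor threads started above); the trailing '' is an empty pass-through argument added by the run_test wrapper. The five-second run whose progress and per-device results follow was started with:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3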
00:53:48.179 22912.00 IOPS, 89.50 MiB/s [2024-12-09T10:04:56.178Z] 22640.00 IOPS, 88.44 MiB/s [2024-12-09T10:04:57.112Z] 22199.67 IOPS, 86.72 MiB/s [2024-12-09T10:04:58.049Z] 21760.00 IOPS, 85.00 MiB/s [2024-12-09T10:04:58.049Z] 21152.00 IOPS, 82.62 MiB/s
00:53:51.005 Latency(us)
00:53:51.005 [2024-12-09T10:04:58.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:53:51.005 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:53:51.005 Verification LBA range: start 0x0 length 0x80000
00:53:51.005 nvme0n1 : 5.06 1544.35 6.03 0.00 0.00 82736.05 11439.01 68634.07
00:53:51.005 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:53:51.005 Verification LBA range: start 0x80000 length 0x80000
00:53:51.005 nvme0n1 : 5.06 1516.99 5.93 0.00 0.00 84228.17 16920.20 70063.94
00:53:51.005 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:53:51.005 Verification LBA range: start 0x0 length 0x80000
00:53:51.005 nvme0n2 : 5.04 1548.84 6.05 0.00 0.00 82336.55 15192.44 70063.94
00:53:51.005 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:53:51.005 Verification LBA range: start 0x80000 length 0x80000
00:53:51.005 nvme0n2 : 5.05 1519.40 5.94 0.00 0.00 83932.89 18111.77 71017.19
00:53:51.005 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:53:51.005 Verification LBA range: start 0x0 length 0x80000
00:53:51.005 nvme0n3 : 5.06 1543.45 6.03 0.00 0.00 82464.11 20256.58 77689.95
00:53:51.005 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:53:51.005 Verification LBA range: start 0x80000 length 0x80000
00:53:51.005 nvme0n3 : 5.06 1516.47 5.92 0.00 0.00 83911.22 15728.64 76736.70
00:53:51.005 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:53:51.005 Verification LBA range: start 0x0 length 0x20000
00:53:51.005 nvme1n1 : 5.06 1542.84 6.03 0.00 0.00 82331.99 14954.12 81502.95
00:53:51.005 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:53:51.005 Verification LBA range: start 0x20000 length 0x20000
00:53:51.005 nvme1n1 : 5.06 1518.88 5.93 0.00 0.00 83600.80 15490.33 75783.45
00:53:51.005 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:53:51.005 Verification LBA range: start 0x0 length 0xbd0bd
00:53:51.005 nvme2n1 : 5.08 2805.35 10.96 0.00 0.00 45155.86 4438.57 68157.44
00:53:51.005 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:53:51.005 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:53:51.005 nvme2n1 : 5.07 2778.03 10.85 0.00 0.00 45545.52 3932.16 65297.69
00:53:51.005 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:53:51.005 Verification LBA range: start 0x0 length 0xa0000
00:53:51.005 nvme3n1 : 5.08 1536.05 6.00 0.00 0.00 82303.48 12153.95 79596.45
00:53:51.005 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:53:51.005 Verification LBA range: start 0xa0000 length 0xa0000
00:53:51.005 nvme3n1 : 5.07 1514.40 5.92 0.00 0.00 83420.68 4676.89 78643.20
00:53:51.005 [2024-12-09T10:04:58.049Z] ===================================================================================================================
00:53:51.005 [2024-12-09T10:04:58.049Z] Total : 20885.04 81.58 0.00 0.00 73006.63 3932.16 81502.95
00:53:52.382
00:53:52.382 real 0m7.132s
00:53:52.382 user 0m11.180s
00:53:52.382 sys 0m1.840s
00:53:52.382 10:04:58 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:53:52.382 10:04:58 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:53:52.382 ************************************
00:53:52.382 END TEST bdev_verify
00:53:52.382 ************************************
00:53:52.382 10:04:59 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:53:52.382 10:04:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:53:52.382 10:04:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:53:52.382 10:04:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:53:52.382 ************************************
00:53:52.382 START TEST bdev_verify_big_io
00:53:52.382 ************************************
00:53:52.382 10:04:59 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:53:52.382 [2024-12-09 10:04:59.135004] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:53:52.382 [2024-12-09 10:04:59.135145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75120 ]
00:53:52.382 [2024-12-09 10:04:59.306594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:53:52.641 [2024-12-09 10:04:59.446865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:53:52.641 [2024-12-09 10:04:59.446882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:53:53.207 Running I/O for 5 seconds...
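Note: bdev_verify_big_io repeats the verify workload with -o 65536, i.e. 64 KiB I/O instead of 4 KiB, which exercises request splitting on the same bdevs. In the Latency(us) table that follows, runtime(s) is each job's measured duration, Fail/s and TO/s count failed and timed-out I/O per second, and Average/min/max are completion latencies in microseconds; the MiB/s column is simply IOPS times the I/O size, for example for the first nvme0n1 job below:

    114.89 IOPS x 65536 B = 114.89 / 16 MiB/s = 7.18 MiB/s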
00:53:59.034 2160.00 IOPS, 135.00 MiB/s [2024-12-09T10:05:06.078Z] 2821.50 IOPS, 176.34 MiB/s
00:53:59.034 Latency(us)
00:53:59.034 [2024-12-09T10:05:06.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:53:59.034 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:53:59.034 Verification LBA range: start 0x0 length 0x8000
00:53:59.034 nvme0n1 : 5.85 114.89 7.18 0.00 0.00 1089352.66 45279.42 2653850.53
00:53:59.034 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:53:59.034 Verification LBA range: start 0x8000 length 0x8000
00:53:59.034 nvme0n1 : 5.80 111.80 6.99 0.00 0.00 1122052.08 78643.20 1060015.01
00:53:59.034 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:53:59.034 Verification LBA range: start 0x0 length 0x8000
00:53:59.034 nvme0n2 : 5.81 115.06 7.19 0.00 0.00 1040536.87 157286.40 2318306.21
00:53:59.034 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:53:59.034 Verification LBA range: start 0x8000 length 0x8000
00:53:59.034 nvme0n2 : 5.83 120.77 7.55 0.00 0.00 1008845.98 30504.03 949437.91
00:53:59.034 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:53:59.034 Verification LBA range: start 0x0 length 0x8000
00:53:59.034 nvme0n3 : 5.85 116.06 7.25 0.00 0.00 988577.35 159192.90 1601461.53
00:53:59.034 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:53:59.034 Verification LBA range: start 0x8000 length 0x8000
00:53:59.034 nvme0n3 : 5.82 118.14 7.38 0.00 0.00 1016024.66 18350.08 1143901.09
00:53:59.034 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:53:59.034 Verification LBA range: start 0x0 length 0x2000
00:53:59.034 nvme1n1 : 5.85 150.34 9.40 0.00 0.00 749541.19 37891.72 1143901.09
00:53:59.034 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:53:59.034 Verification LBA range: start 0x2000 length 0x2000
00:53:59.034 nvme1n1 : 5.83 137.19 8.57 0.00 0.00 849708.20 22997.18 1075267.03
00:53:59.034 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:53:59.034 Verification LBA range: start 0x0 length 0xbd0b
00:53:59.034 nvme2n1 : 5.95 161.47 10.09 0.00 0.00 671064.56 9651.67 1525201.45
00:53:59.034 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:53:59.034 Verification LBA range: start 0xbd0b length 0xbd0b
00:53:59.034 nvme2n1 : 5.83 167.32 10.46 0.00 0.00 676351.60 8817.57 674901.64
00:53:59.034 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:53:59.034 Verification LBA range: start 0x0 length 0xa000
00:53:59.034 nvme3n1 : 5.95 185.51 11.59 0.00 0.00 573363.86 539.93 869364.83
00:53:59.034 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:53:59.034 Verification LBA range: start 0xa000 length 0xa000
00:53:59.034 nvme3n1 : 5.84 131.59 8.22 0.00 0.00 833134.24 8162.21 1342177.28
00:53:59.034 [2024-12-09T10:05:06.078Z] ===================================================================================================================
00:53:59.034 [2024-12-09T10:05:06.078Z] Total : 1630.15 101.88 0.00 0.00 853802.73 539.93 2653850.53
00:54:00.475
00:54:00.475 real 0m8.255s
00:54:00.475 user 0m14.940s
00:54:00.475 sys 0m0.600s
00:54:00.475 10:05:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:54:00.475 10:05:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:54:00.475 ************************************
00:54:00.475 END TEST bdev_verify_big_io
00:54:00.475 ************************************
00:54:00.475 10:05:07 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:54:00.475 10:05:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:54:00.475 10:05:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:54:00.475 10:05:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:54:00.475 ************************************
00:54:00.475 START TEST bdev_write_zeroes
00:54:00.475 ************************************
00:54:00.475 10:05:07 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:54:00.733 [2024-12-09 10:05:07.459949] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:54:00.733 [2024-12-09 10:05:07.460123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75232 ]
00:54:00.733 [2024-12-09 10:05:07.644675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:54:00.733 [2024-12-09 10:05:07.773030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:54:01.301 Running I/O for 1 seconds...
00:54:02.495 75200.00 IOPS, 293.75 MiB/s
00:54:02.495 Latency(us)
00:54:02.495 [2024-12-09T10:05:09.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:54:02.495 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:54:02.495 nvme0n1 : 1.02 11404.68 44.55 0.00 0.00 11211.46 6196.13 23592.96
00:54:02.495 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:54:02.495 nvme0n2 : 1.02 11391.12 44.50 0.00 0.00 11214.41 6464.23 23950.43
00:54:02.495 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:54:02.495 nvme0n3 : 1.02 11377.64 44.44 0.00 0.00 11216.68 6464.23 24307.90
00:54:02.495 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:54:02.495 nvme1n1 : 1.02 11364.47 44.39 0.00 0.00 11219.10 6464.23 24546.21
00:54:02.495 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:54:02.495 nvme2n1 : 1.03 17338.94 67.73 0.00 0.00 7344.14 2263.97 15728.64
00:54:02.495 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:54:02.495 nvme3n1 : 1.03 11384.59 44.47 0.00 0.00 11121.33 4408.79 24069.59
00:54:02.495 [2024-12-09T10:05:09.539Z] ===================================================================================================================
00:54:02.495 [2024-12-09T10:05:09.539Z] Total : 74261.43 290.08 0.00 0.00 10294.01 2263.97 24546.21
00:54:03.431
00:54:03.431 real 0m3.018s
00:54:03.431 user 0m2.151s
00:54:03.431 sys 0m0.667s
00:54:03.431 10:05:10 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:54:03.431 10:05:10 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:54:03.431 ************************************
00:54:03.431 END TEST bdev_write_zeroes
00:54:03.431 ************************************
00:54:03.431 10:05:10 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:54:03.431 10:05:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:54:03.431 10:05:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:54:03.431 10:05:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:54:03.431 ************************************
00:54:03.431 START TEST bdev_json_nonenclosed
00:54:03.431 ************************************
00:54:03.431 10:05:10 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:54:03.693 [2024-12-09 10:05:10.524757] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:54:03.693 [2024-12-09 10:05:10.525614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75290 ]
00:54:03.693 [2024-12-09 10:05:10.708622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:54:03.959 [2024-12-09 10:05:10.838229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:54:03.959 [2024-12-09 10:05:10.838354] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:54:03.959 [2024-12-09 10:05:10.838385] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:54:03.959 [2024-12-09 10:05:10.838400] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:54:04.217
00:54:04.217 real 0m0.676s
00:54:04.217 user 0m0.431s
00:54:04.217 sys 0m0.139s
00:54:04.217 10:05:11 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:54:04.217 ************************************
00:54:04.217 10:05:11 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:54:04.217 END TEST bdev_json_nonenclosed
00:54:04.217 ************************************
00:54:04.217 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:54:04.217 10:05:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:54:04.217 10:05:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:54:04.217 10:05:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:54:04.217 ************************************
00:54:04.217 START TEST bdev_json_nonarray
00:54:04.217 ************************************
00:54:04.217 10:05:11 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:54:04.217 [2024-12-09 10:05:11.261040] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
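Note: bdev_json_nonenclosed and bdev_json_nonarray are negative tests. Judging from the json_config errors in this run ("not enclosed in {}." above, "'subsystems' should be an array." below), nonenclosed.json and nonarray.json each break one schema rule on purpose, and the pass condition is that bdevperf rejects the file and exits through spdk_app_stop instead of crashing. For contrast, a minimal well-formed SPDK JSON configuration is a single object whose "subsystems" member is an array (sketch only; the empty bdev subsystem is illustrative):

    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }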
00:54:04.217 [2024-12-09 10:05:11.261218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75317 ] 00:54:04.475 [2024-12-09 10:05:11.440357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:04.753 [2024-12-09 10:05:11.566812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:04.753 [2024-12-09 10:05:11.566960] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:54:04.753 [2024-12-09 10:05:11.566996] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:54:04.753 [2024-12-09 10:05:11.567011] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:54:05.041 00:54:05.041 real 0m0.686s 00:54:05.041 user 0m0.434s 00:54:05.041 sys 0m0.146s 00:54:05.041 10:05:11 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:05.041 10:05:11 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:54:05.041 ************************************ 00:54:05.041 END TEST bdev_json_nonarray 00:54:05.041 ************************************ 00:54:05.041 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:54:05.041 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:54:05.041 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:54:05.041 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:54:05.041 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:54:05.041 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:54:05.041 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:54:05.041 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:54:05.041 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:54:05.041 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:54:05.041 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:54:05.041 10:05:11 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:54:05.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:54:06.176 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:54:06.176 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:54:06.176 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:54:06.176 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:54:06.435 00:54:06.435 real 0m58.765s 00:54:06.435 user 1m42.058s 00:54:06.435 sys 0m28.641s 00:54:06.435 10:05:13 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:06.435 10:05:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:54:06.435 ************************************ 00:54:06.435 END TEST blockdev_xnvme 00:54:06.435 ************************************ 00:54:06.435 10:05:13 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:54:06.435 10:05:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:54:06.435 10:05:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:06.435 10:05:13 -- 
common/autotest_common.sh@10 -- # set +x 00:54:06.435 ************************************ 00:54:06.435 START TEST ublk 00:54:06.435 ************************************ 00:54:06.435 10:05:13 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:54:06.435 * Looking for test storage... 00:54:06.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:54:06.435 10:05:13 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:54:06.435 10:05:13 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:54:06.435 10:05:13 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:54:06.435 10:05:13 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:54:06.435 10:05:13 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:06.435 10:05:13 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:06.435 10:05:13 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:06.435 10:05:13 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:54:06.435 10:05:13 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:54:06.435 10:05:13 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:54:06.435 10:05:13 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:54:06.435 10:05:13 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:54:06.435 10:05:13 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:54:06.435 10:05:13 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:54:06.435 10:05:13 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:06.435 10:05:13 ublk -- scripts/common.sh@344 -- # case "$op" in 00:54:06.435 10:05:13 ublk -- scripts/common.sh@345 -- # : 1 00:54:06.435 10:05:13 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:06.435 10:05:13 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:06.435 10:05:13 ublk -- scripts/common.sh@365 -- # decimal 1 00:54:06.693 10:05:13 ublk -- scripts/common.sh@353 -- # local d=1 00:54:06.693 10:05:13 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:06.693 10:05:13 ublk -- scripts/common.sh@355 -- # echo 1 00:54:06.693 10:05:13 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:54:06.693 10:05:13 ublk -- scripts/common.sh@366 -- # decimal 2 00:54:06.693 10:05:13 ublk -- scripts/common.sh@353 -- # local d=2 00:54:06.693 10:05:13 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:06.694 10:05:13 ublk -- scripts/common.sh@355 -- # echo 2 00:54:06.694 10:05:13 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:54:06.694 10:05:13 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:06.694 10:05:13 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:06.694 10:05:13 ublk -- scripts/common.sh@368 -- # return 0 00:54:06.694 10:05:13 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:06.694 10:05:13 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:54:06.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:06.694 --rc genhtml_branch_coverage=1 00:54:06.694 --rc genhtml_function_coverage=1 00:54:06.694 --rc genhtml_legend=1 00:54:06.694 --rc geninfo_all_blocks=1 00:54:06.694 --rc geninfo_unexecuted_blocks=1 00:54:06.694 00:54:06.694 ' 00:54:06.694 10:05:13 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:54:06.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:06.694 --rc genhtml_branch_coverage=1 00:54:06.694 --rc genhtml_function_coverage=1 00:54:06.694 --rc genhtml_legend=1 00:54:06.694 --rc geninfo_all_blocks=1 00:54:06.694 --rc geninfo_unexecuted_blocks=1 00:54:06.694 00:54:06.694 ' 00:54:06.694 10:05:13 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:54:06.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:06.694 --rc genhtml_branch_coverage=1 00:54:06.694 --rc genhtml_function_coverage=1 00:54:06.694 --rc genhtml_legend=1 00:54:06.694 --rc geninfo_all_blocks=1 00:54:06.694 --rc geninfo_unexecuted_blocks=1 00:54:06.694 00:54:06.694 ' 00:54:06.694 10:05:13 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:54:06.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:06.694 --rc genhtml_branch_coverage=1 00:54:06.694 --rc genhtml_function_coverage=1 00:54:06.694 --rc genhtml_legend=1 00:54:06.694 --rc geninfo_all_blocks=1 00:54:06.694 --rc geninfo_unexecuted_blocks=1 00:54:06.694 00:54:06.694 ' 00:54:06.694 10:05:13 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:54:06.694 10:05:13 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:54:06.694 10:05:13 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:54:06.694 10:05:13 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:54:06.694 10:05:13 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:54:06.694 10:05:13 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:54:06.694 10:05:13 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:54:06.694 10:05:13 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:54:06.694 10:05:13 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:54:06.694 10:05:13 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:54:06.694 10:05:13 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:54:06.694 10:05:13 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:54:06.694 10:05:13 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:54:06.694 10:05:13 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:54:06.694 10:05:13 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:54:06.694 10:05:13 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:54:06.694 10:05:13 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:54:06.694 10:05:13 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:54:06.694 10:05:13 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:54:06.694 10:05:13 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:54:06.694 10:05:13 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:54:06.694 10:05:13 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:06.694 10:05:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:54:06.694 ************************************ 00:54:06.694 START TEST test_save_ublk_config 00:54:06.694 ************************************ 00:54:06.694 10:05:13 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:54:06.694 10:05:13 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:54:06.694 10:05:13 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75602 00:54:06.694 10:05:13 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:54:06.694 10:05:13 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:54:06.694 10:05:13 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75602 00:54:06.694 10:05:13 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75602 ']' 00:54:06.694 10:05:13 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:06.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:06.694 10:05:13 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:06.694 10:05:13 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:06.694 10:05:13 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:06.694 10:05:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:54:06.694 [2024-12-09 10:05:13.695622] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
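[Editor's note] test_save_ublk_config exercises a save/replay cycle: start spdk_tgt with ublk tracing, create the ublk target and a malloc-backed disk, dump the live configuration with save_config, then restart the target from that dump and confirm /dev/ublkb0 reappears. A condensed sketch of the first half using the standard SPDK rpc.py client (the RPC names and -q/-d flags match the rpc_cmd calls in this log; sizes and the temp-file path are illustrative):

    ./build/bin/spdk_tgt -L ublk & tgt=$!
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 128 4096   # 128 MiB bdev, 4 KiB blocks
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128    # exposes /dev/ublkb0
    ./scripts/rpc.py save_config > /tmp/ublk.json             # dump running config as JSON
    kill $tgt; wait $tgt 2>/dev/null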
00:54:06.694 [2024-12-09 10:05:13.697121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75602 ] 00:54:06.952 [2024-12-09 10:05:13.904509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:07.211 [2024-12-09 10:05:14.049921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:08.147 10:05:14 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:08.147 10:05:14 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:54:08.147 10:05:14 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:54:08.147 10:05:14 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:54:08.147 10:05:14 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:08.147 10:05:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:54:08.147 [2024-12-09 10:05:14.960343] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:54:08.147 [2024-12-09 10:05:14.961540] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:54:08.147 malloc0 00:54:08.147 [2024-12-09 10:05:15.047852] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:54:08.147 [2024-12-09 10:05:15.048010] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:54:08.147 [2024-12-09 10:05:15.048028] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:54:08.147 [2024-12-09 10:05:15.048038] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:54:08.147 [2024-12-09 10:05:15.055312] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:54:08.147 [2024-12-09 10:05:15.055343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:54:08.147 [2024-12-09 10:05:15.062328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:54:08.147 [2024-12-09 10:05:15.062454] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:54:08.147 [2024-12-09 10:05:15.086296] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:54:08.147 0 00:54:08.147 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:08.147 10:05:15 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:54:08.147 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:08.147 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:54:08.405 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:08.405 10:05:15 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:54:08.405 "subsystems": [ 00:54:08.405 { 00:54:08.405 "subsystem": "fsdev", 00:54:08.405 "config": [ 00:54:08.405 { 00:54:08.405 "method": "fsdev_set_opts", 00:54:08.405 "params": { 00:54:08.405 "fsdev_io_pool_size": 65535, 00:54:08.406 "fsdev_io_cache_size": 256 00:54:08.406 } 00:54:08.406 } 00:54:08.406 ] 00:54:08.406 }, 00:54:08.406 { 00:54:08.406 "subsystem": "keyring", 00:54:08.406 "config": [] 00:54:08.406 }, 00:54:08.406 { 00:54:08.406 "subsystem": "iobuf", 00:54:08.406 "config": [ 00:54:08.406 { 
00:54:08.406 "method": "iobuf_set_options", 00:54:08.406 "params": { 00:54:08.406 "small_pool_count": 8192, 00:54:08.406 "large_pool_count": 1024, 00:54:08.406 "small_bufsize": 8192, 00:54:08.406 "large_bufsize": 135168, 00:54:08.406 "enable_numa": false 00:54:08.406 } 00:54:08.406 } 00:54:08.406 ] 00:54:08.406 }, 00:54:08.406 { 00:54:08.406 "subsystem": "sock", 00:54:08.406 "config": [ 00:54:08.406 { 00:54:08.406 "method": "sock_set_default_impl", 00:54:08.406 "params": { 00:54:08.406 "impl_name": "posix" 00:54:08.406 } 00:54:08.406 }, 00:54:08.406 { 00:54:08.406 "method": "sock_impl_set_options", 00:54:08.406 "params": { 00:54:08.406 "impl_name": "ssl", 00:54:08.406 "recv_buf_size": 4096, 00:54:08.406 "send_buf_size": 4096, 00:54:08.406 "enable_recv_pipe": true, 00:54:08.406 "enable_quickack": false, 00:54:08.406 "enable_placement_id": 0, 00:54:08.406 "enable_zerocopy_send_server": true, 00:54:08.406 "enable_zerocopy_send_client": false, 00:54:08.406 "zerocopy_threshold": 0, 00:54:08.406 "tls_version": 0, 00:54:08.406 "enable_ktls": false 00:54:08.406 } 00:54:08.406 }, 00:54:08.406 { 00:54:08.406 "method": "sock_impl_set_options", 00:54:08.406 "params": { 00:54:08.406 "impl_name": "posix", 00:54:08.406 "recv_buf_size": 2097152, 00:54:08.406 "send_buf_size": 2097152, 00:54:08.406 "enable_recv_pipe": true, 00:54:08.406 "enable_quickack": false, 00:54:08.406 "enable_placement_id": 0, 00:54:08.406 "enable_zerocopy_send_server": true, 00:54:08.406 "enable_zerocopy_send_client": false, 00:54:08.406 "zerocopy_threshold": 0, 00:54:08.406 "tls_version": 0, 00:54:08.406 "enable_ktls": false 00:54:08.406 } 00:54:08.406 } 00:54:08.406 ] 00:54:08.406 }, 00:54:08.406 { 00:54:08.406 "subsystem": "vmd", 00:54:08.406 "config": [] 00:54:08.406 }, 00:54:08.406 { 00:54:08.406 "subsystem": "accel", 00:54:08.406 "config": [ 00:54:08.406 { 00:54:08.406 "method": "accel_set_options", 00:54:08.406 "params": { 00:54:08.406 "small_cache_size": 128, 00:54:08.406 "large_cache_size": 16, 00:54:08.406 "task_count": 2048, 00:54:08.406 "sequence_count": 2048, 00:54:08.406 "buf_count": 2048 00:54:08.406 } 00:54:08.406 } 00:54:08.406 ] 00:54:08.406 }, 00:54:08.406 { 00:54:08.406 "subsystem": "bdev", 00:54:08.406 "config": [ 00:54:08.406 { 00:54:08.406 "method": "bdev_set_options", 00:54:08.406 "params": { 00:54:08.406 "bdev_io_pool_size": 65535, 00:54:08.406 "bdev_io_cache_size": 256, 00:54:08.406 "bdev_auto_examine": true, 00:54:08.406 "iobuf_small_cache_size": 128, 00:54:08.406 "iobuf_large_cache_size": 16 00:54:08.406 } 00:54:08.406 }, 00:54:08.406 { 00:54:08.406 "method": "bdev_raid_set_options", 00:54:08.406 "params": { 00:54:08.406 "process_window_size_kb": 1024, 00:54:08.406 "process_max_bandwidth_mb_sec": 0 00:54:08.406 } 00:54:08.406 }, 00:54:08.406 { 00:54:08.406 "method": "bdev_iscsi_set_options", 00:54:08.406 "params": { 00:54:08.406 "timeout_sec": 30 00:54:08.406 } 00:54:08.406 }, 00:54:08.406 { 00:54:08.406 "method": "bdev_nvme_set_options", 00:54:08.406 "params": { 00:54:08.406 "action_on_timeout": "none", 00:54:08.406 "timeout_us": 0, 00:54:08.406 "timeout_admin_us": 0, 00:54:08.406 "keep_alive_timeout_ms": 10000, 00:54:08.406 "arbitration_burst": 0, 00:54:08.406 "low_priority_weight": 0, 00:54:08.406 "medium_priority_weight": 0, 00:54:08.406 "high_priority_weight": 0, 00:54:08.406 "nvme_adminq_poll_period_us": 10000, 00:54:08.406 "nvme_ioq_poll_period_us": 0, 00:54:08.406 "io_queue_requests": 0, 00:54:08.406 "delay_cmd_submit": true, 00:54:08.406 "transport_retry_count": 4, 00:54:08.406 
"bdev_retry_count": 3, 00:54:08.406 "transport_ack_timeout": 0, 00:54:08.406 "ctrlr_loss_timeout_sec": 0, 00:54:08.406 "reconnect_delay_sec": 0, 00:54:08.406 "fast_io_fail_timeout_sec": 0, 00:54:08.406 "disable_auto_failback": false, 00:54:08.406 "generate_uuids": false, 00:54:08.406 "transport_tos": 0, 00:54:08.406 "nvme_error_stat": false, 00:54:08.406 "rdma_srq_size": 0, 00:54:08.406 "io_path_stat": false, 00:54:08.406 "allow_accel_sequence": false, 00:54:08.406 "rdma_max_cq_size": 0, 00:54:08.406 "rdma_cm_event_timeout_ms": 0, 00:54:08.406 "dhchap_digests": [ 00:54:08.406 "sha256", 00:54:08.406 "sha384", 00:54:08.406 "sha512" 00:54:08.406 ], 00:54:08.406 "dhchap_dhgroups": [ 00:54:08.406 "null", 00:54:08.406 "ffdhe2048", 00:54:08.406 "ffdhe3072", 00:54:08.406 "ffdhe4096", 00:54:08.406 "ffdhe6144", 00:54:08.406 "ffdhe8192" 00:54:08.406 ] 00:54:08.406 } 00:54:08.406 }, 00:54:08.406 { 00:54:08.406 "method": "bdev_nvme_set_hotplug", 00:54:08.406 "params": { 00:54:08.406 "period_us": 100000, 00:54:08.406 "enable": false 00:54:08.406 } 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "method": "bdev_malloc_create", 00:54:08.407 "params": { 00:54:08.407 "name": "malloc0", 00:54:08.407 "num_blocks": 8192, 00:54:08.407 "block_size": 4096, 00:54:08.407 "physical_block_size": 4096, 00:54:08.407 "uuid": "9e68edee-6476-4ffb-8d6e-b9a4a71e0cf8", 00:54:08.407 "optimal_io_boundary": 0, 00:54:08.407 "md_size": 0, 00:54:08.407 "dif_type": 0, 00:54:08.407 "dif_is_head_of_md": false, 00:54:08.407 "dif_pi_format": 0 00:54:08.407 } 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "method": "bdev_wait_for_examine" 00:54:08.407 } 00:54:08.407 ] 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "subsystem": "scsi", 00:54:08.407 "config": null 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "subsystem": "scheduler", 00:54:08.407 "config": [ 00:54:08.407 { 00:54:08.407 "method": "framework_set_scheduler", 00:54:08.407 "params": { 00:54:08.407 "name": "static" 00:54:08.407 } 00:54:08.407 } 00:54:08.407 ] 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "subsystem": "vhost_scsi", 00:54:08.407 "config": [] 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "subsystem": "vhost_blk", 00:54:08.407 "config": [] 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "subsystem": "ublk", 00:54:08.407 "config": [ 00:54:08.407 { 00:54:08.407 "method": "ublk_create_target", 00:54:08.407 "params": { 00:54:08.407 "cpumask": "1" 00:54:08.407 } 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "method": "ublk_start_disk", 00:54:08.407 "params": { 00:54:08.407 "bdev_name": "malloc0", 00:54:08.407 "ublk_id": 0, 00:54:08.407 "num_queues": 1, 00:54:08.407 "queue_depth": 128 00:54:08.407 } 00:54:08.407 } 00:54:08.407 ] 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "subsystem": "nbd", 00:54:08.407 "config": [] 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "subsystem": "nvmf", 00:54:08.407 "config": [ 00:54:08.407 { 00:54:08.407 "method": "nvmf_set_config", 00:54:08.407 "params": { 00:54:08.407 "discovery_filter": "match_any", 00:54:08.407 "admin_cmd_passthru": { 00:54:08.407 "identify_ctrlr": false 00:54:08.407 }, 00:54:08.407 "dhchap_digests": [ 00:54:08.407 "sha256", 00:54:08.407 "sha384", 00:54:08.407 "sha512" 00:54:08.407 ], 00:54:08.407 "dhchap_dhgroups": [ 00:54:08.407 "null", 00:54:08.407 "ffdhe2048", 00:54:08.407 "ffdhe3072", 00:54:08.407 "ffdhe4096", 00:54:08.407 "ffdhe6144", 00:54:08.407 "ffdhe8192" 00:54:08.407 ] 00:54:08.407 } 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "method": "nvmf_set_max_subsystems", 00:54:08.407 "params": { 00:54:08.407 "max_subsystems": 1024 
00:54:08.407 } 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "method": "nvmf_set_crdt", 00:54:08.407 "params": { 00:54:08.407 "crdt1": 0, 00:54:08.407 "crdt2": 0, 00:54:08.407 "crdt3": 0 00:54:08.407 } 00:54:08.407 } 00:54:08.407 ] 00:54:08.407 }, 00:54:08.407 { 00:54:08.407 "subsystem": "iscsi", 00:54:08.407 "config": [ 00:54:08.407 { 00:54:08.407 "method": "iscsi_set_options", 00:54:08.407 "params": { 00:54:08.407 "node_base": "iqn.2016-06.io.spdk", 00:54:08.407 "max_sessions": 128, 00:54:08.407 "max_connections_per_session": 2, 00:54:08.407 "max_queue_depth": 64, 00:54:08.407 "default_time2wait": 2, 00:54:08.407 "default_time2retain": 20, 00:54:08.407 "first_burst_length": 8192, 00:54:08.407 "immediate_data": true, 00:54:08.407 "allow_duplicated_isid": false, 00:54:08.407 "error_recovery_level": 0, 00:54:08.407 "nop_timeout": 60, 00:54:08.407 "nop_in_interval": 30, 00:54:08.407 "disable_chap": false, 00:54:08.407 "require_chap": false, 00:54:08.407 "mutual_chap": false, 00:54:08.407 "chap_group": 0, 00:54:08.407 "max_large_datain_per_connection": 64, 00:54:08.407 "max_r2t_per_connection": 4, 00:54:08.407 "pdu_pool_size": 36864, 00:54:08.407 "immediate_data_pool_size": 16384, 00:54:08.407 "data_out_pool_size": 2048 00:54:08.407 } 00:54:08.407 } 00:54:08.407 ] 00:54:08.407 } 00:54:08.407 ] 00:54:08.407 }' 00:54:08.407 10:05:15 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75602 00:54:08.407 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75602 ']' 00:54:08.407 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75602 00:54:08.407 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:54:08.407 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:08.407 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75602 00:54:08.407 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:54:08.407 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:08.407 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75602' 00:54:08.407 killing process with pid 75602 00:54:08.407 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75602 00:54:08.407 10:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75602 00:54:09.784 [2024-12-09 10:05:16.751546] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:54:09.784 [2024-12-09 10:05:16.790373] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:54:09.785 [2024-12-09 10:05:16.790509] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:54:09.785 [2024-12-09 10:05:16.798295] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:54:09.785 [2024-12-09 10:05:16.798357] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:54:09.785 [2024-12-09 10:05:16.798388] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:54:09.785 [2024-12-09 10:05:16.798422] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:54:09.785 [2024-12-09 10:05:16.798607] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:54:11.690 10:05:18 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75668 00:54:11.690 10:05:18 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75668 00:54:11.690 10:05:18 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75668 ']' 00:54:11.690 10:05:18 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:11.690 10:05:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:54:11.690 10:05:18 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:11.690 10:05:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:54:11.690 "subsystems": [ 00:54:11.690 { 00:54:11.690 "subsystem": "fsdev", 00:54:11.690 "config": [ 00:54:11.690 { 00:54:11.690 "method": "fsdev_set_opts", 00:54:11.690 "params": { 00:54:11.690 "fsdev_io_pool_size": 65535, 00:54:11.690 "fsdev_io_cache_size": 256 00:54:11.690 } 00:54:11.690 } 00:54:11.690 ] 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "keyring", 00:54:11.690 "config": [] 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "iobuf", 00:54:11.690 "config": [ 00:54:11.690 { 00:54:11.690 "method": "iobuf_set_options", 00:54:11.690 "params": { 00:54:11.690 "small_pool_count": 8192, 00:54:11.690 "large_pool_count": 1024, 00:54:11.690 "small_bufsize": 8192, 00:54:11.690 "large_bufsize": 135168, 00:54:11.690 "enable_numa": false 00:54:11.690 } 00:54:11.690 } 00:54:11.690 ] 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "sock", 00:54:11.690 "config": [ 00:54:11.690 { 00:54:11.690 "method": "sock_set_default_impl", 00:54:11.690 "params": { 00:54:11.690 "impl_name": "posix" 00:54:11.690 } 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "method": "sock_impl_set_options", 00:54:11.690 "params": { 00:54:11.690 "impl_name": "ssl", 00:54:11.690 "recv_buf_size": 4096, 00:54:11.690 "send_buf_size": 4096, 00:54:11.690 "enable_recv_pipe": true, 00:54:11.690 "enable_quickack": false, 00:54:11.690 "enable_placement_id": 0, 00:54:11.690 "enable_zerocopy_send_server": true, 00:54:11.690 "enable_zerocopy_send_client": false, 00:54:11.690 "zerocopy_threshold": 0, 00:54:11.690 "tls_version": 0, 00:54:11.690 "enable_ktls": false 00:54:11.690 } 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "method": "sock_impl_set_options", 00:54:11.690 "params": { 00:54:11.690 "impl_name": "posix", 00:54:11.690 "recv_buf_size": 2097152, 00:54:11.690 "send_buf_size": 2097152, 00:54:11.690 "enable_recv_pipe": true, 00:54:11.690 "enable_quickack": false, 00:54:11.690 "enable_placement_id": 0, 00:54:11.690 "enable_zerocopy_send_server": true, 00:54:11.690 "enable_zerocopy_send_client": false, 00:54:11.690 "zerocopy_threshold": 0, 00:54:11.690 "tls_version": 0, 00:54:11.690 "enable_ktls": false 00:54:11.690 } 00:54:11.690 } 00:54:11.690 ] 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "vmd", 00:54:11.690 "config": [] 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "accel", 00:54:11.690 "config": [ 00:54:11.690 { 00:54:11.690 "method": "accel_set_options", 00:54:11.690 "params": { 00:54:11.690 "small_cache_size": 128, 00:54:11.690 "large_cache_size": 16, 00:54:11.690 "task_count": 2048, 00:54:11.690 "sequence_count": 2048, 00:54:11.690 "buf_count": 2048 00:54:11.690 } 00:54:11.690 } 00:54:11.690 ] 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "bdev", 00:54:11.690 "config": [ 00:54:11.690 { 00:54:11.690 "method": "bdev_set_options", 00:54:11.690 "params": { 00:54:11.690 "bdev_io_pool_size": 65535, 00:54:11.690 "bdev_io_cache_size": 256, 00:54:11.690 "bdev_auto_examine": true, 
00:54:11.690 "iobuf_small_cache_size": 128, 00:54:11.690 "iobuf_large_cache_size": 16 00:54:11.690 } 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "method": "bdev_raid_set_options", 00:54:11.690 "params": { 00:54:11.690 "process_window_size_kb": 1024, 00:54:11.690 "process_max_bandwidth_mb_sec": 0 00:54:11.690 } 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "method": "bdev_iscsi_set_options", 00:54:11.690 "params": { 00:54:11.690 "timeout_sec": 30 00:54:11.690 } 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "method": "bdev_nvme_set_options", 00:54:11.690 "params": { 00:54:11.690 "action_on_timeout": "none", 00:54:11.690 "timeout_us": 0, 00:54:11.690 "timeout_admin_us": 0, 00:54:11.690 "keep_alive_timeout_ms": 10000, 00:54:11.690 "arbitration_burst": 0, 00:54:11.690 "low_priority_weight": 0, 00:54:11.690 "medium_priority_weight": 0, 00:54:11.690 "high_priority_weight": 0, 00:54:11.690 "nvme_adminq_poll_period_us": 10000, 00:54:11.690 "nvme_ioq_poll_period_us": 0, 00:54:11.690 "io_queue_requests": 0, 00:54:11.690 "delay_cmd_submit": true, 00:54:11.690 "transport_retry_count": 4, 00:54:11.690 "bdev_retry_count": 3, 00:54:11.690 "transport_ack_timeout": 0, 00:54:11.690 "ctrlr_loss_timeout_sec": 0, 00:54:11.690 "reconnect_delay_sec": 0, 00:54:11.690 "fast_io_fail_timeout_sec": 0, 00:54:11.690 "disable_auto_failback": false, 00:54:11.690 "generate_uuids": false, 00:54:11.690 "transport_tos": 0, 00:54:11.690 "nvme_error_stat": false, 00:54:11.690 "rdma_srq_size": 0, 00:54:11.690 "io_path_stat": false, 00:54:11.690 "allow_accel_sequence": false, 00:54:11.690 "rdma_max_cq_size": 0, 00:54:11.690 "rdma_cm_event_timeout_ms": 0, 00:54:11.690 "dhchap_digests": [ 00:54:11.690 "sha256", 00:54:11.690 "sha384", 00:54:11.690 "sha512" 00:54:11.690 ], 00:54:11.690 "dhchap_dhgroups": [ 00:54:11.690 "null", 00:54:11.690 "ffdhe2048", 00:54:11.690 "ffdhe3072", 00:54:11.690 "ffdhe4096", 00:54:11.690 "ffdhe6144", 00:54:11.690 "ffdhe8192" 00:54:11.690 ] 00:54:11.690 } 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "method": "bdev_nvme_set_hotplug", 00:54:11.690 "params": { 00:54:11.690 "period_us": 100000, 00:54:11.690 "enable": false 00:54:11.690 } 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "method": "bdev_malloc_create", 00:54:11.690 "params": { 00:54:11.690 "name": "malloc0", 00:54:11.690 "num_blocks": 8192, 00:54:11.690 "block_size": 4096, 00:54:11.690 "physical_block_size": 4096, 00:54:11.690 "uuid": "9e68edee-6476-4ffb-8d6e-b9a4a71e0cf8", 00:54:11.690 "optimal_io_boundary": 0, 00:54:11.690 "md_size": 0, 00:54:11.690 "dif_type": 0, 00:54:11.690 "dif_is_head_of_md": false, 00:54:11.690 "dif_pi_format": 0 00:54:11.690 } 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "method": "bdev_wait_for_examine" 00:54:11.690 } 00:54:11.690 ] 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "scsi", 00:54:11.690 "config": null 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "scheduler", 00:54:11.690 "config": [ 00:54:11.690 { 00:54:11.690 "method": "framework_set_scheduler", 00:54:11.690 "params": { 00:54:11.690 "name": "static" 00:54:11.690 } 00:54:11.690 } 00:54:11.690 ] 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "vhost_scsi", 00:54:11.690 "config": [] 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "vhost_blk", 00:54:11.690 "config": [] 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "ublk", 00:54:11.690 "config": [ 00:54:11.690 { 00:54:11.690 "method": "ublk_create_target", 00:54:11.690 "params": { 00:54:11.690 "cpumask": "1" 00:54:11.690 } 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 
"method": "ublk_start_disk", 00:54:11.690 "params": { 00:54:11.690 "bdev_name": "malloc0", 00:54:11.690 "ublk_id": 0, 00:54:11.690 "num_queues": 1, 00:54:11.690 "queue_depth": 128 00:54:11.690 } 00:54:11.690 } 00:54:11.690 ] 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "nbd", 00:54:11.690 "config": [] 00:54:11.690 }, 00:54:11.690 { 00:54:11.690 "subsystem": "nvmf", 00:54:11.690 "config": [ 00:54:11.690 { 00:54:11.690 "method": "nvmf_set_config", 00:54:11.690 "params": { 00:54:11.690 "discovery_filter": "match_any", 00:54:11.690 "admin_cmd_passthru": { 00:54:11.690 "identify_ctrlr": false 00:54:11.690 }, 00:54:11.690 "dhchap_digests": [ 00:54:11.690 "sha256", 00:54:11.690 "sha384", 00:54:11.690 "sha512" 00:54:11.690 ], 00:54:11.690 "dhchap_dhgroups": [ 00:54:11.690 "null", 00:54:11.690 "ffdhe2048", 00:54:11.690 "ffdhe3072", 00:54:11.690 "ffdhe4096", 00:54:11.690 "ffdhe6144", 00:54:11.690 "ffdhe81Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:11.691 10:05:18 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:11.691 10:05:18 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:11.691 92" 00:54:11.691 ] 00:54:11.691 } 00:54:11.691 }, 00:54:11.691 { 00:54:11.691 "method": "nvmf_set_max_subsystems", 00:54:11.691 "params": { 00:54:11.691 "max_subsystems": 1024 00:54:11.691 } 00:54:11.691 }, 00:54:11.691 { 00:54:11.691 "method": "nvmf_set_crdt", 00:54:11.691 "params": { 00:54:11.691 "crdt1": 0, 00:54:11.691 "crdt2": 0, 00:54:11.691 "crdt3": 0 00:54:11.691 } 00:54:11.691 } 00:54:11.691 ] 00:54:11.691 }, 00:54:11.691 { 00:54:11.691 "subsystem": "iscsi", 00:54:11.691 "config": [ 00:54:11.691 { 00:54:11.691 "method": "iscsi_set_options", 00:54:11.691 "params": { 00:54:11.691 "node_base": "iqn.2016-06.io.spdk", 00:54:11.691 "max_sessions": 128, 00:54:11.691 "max_connections_per_session": 2, 00:54:11.691 "max_queue_depth": 64, 00:54:11.691 "default_time2wait": 2, 00:54:11.691 "default_time2retain": 20, 00:54:11.691 "first_burst_length": 8192, 00:54:11.691 "immediate_data": true, 00:54:11.691 "allow_duplicated_isid": false, 00:54:11.691 "error_recovery_level": 0, 00:54:11.691 "nop_timeout": 60, 00:54:11.691 "nop_in_interval": 30, 00:54:11.691 "disable_chap": false, 00:54:11.691 "require_chap": false, 00:54:11.691 "mutual_chap": false, 00:54:11.691 "chap_group": 0, 00:54:11.691 "max_large_datain_per_connection": 64, 00:54:11.691 "max_r2t_per_connection": 4, 00:54:11.691 "pdu_pool_size": 36864, 00:54:11.691 "immediate_data_pool_size": 16384, 00:54:11.691 "data_out_pool_size": 2048 00:54:11.691 } 00:54:11.691 } 00:54:11.691 ] 00:54:11.691 } 00:54:11.691 ] 00:54:11.691 }' 00:54:11.691 10:05:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:54:11.691 [2024-12-09 10:05:18.720495] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:54:11.691 [2024-12-09 10:05:18.721559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75668 ] 00:54:11.949 [2024-12-09 10:05:18.905056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:12.207 [2024-12-09 10:05:19.039183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:13.142 [2024-12-09 10:05:20.089314] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:54:13.142 [2024-12-09 10:05:20.090540] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:54:13.142 [2024-12-09 10:05:20.096496] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:54:13.142 [2024-12-09 10:05:20.096608] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:54:13.142 [2024-12-09 10:05:20.096627] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:54:13.142 [2024-12-09 10:05:20.096636] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:54:13.142 [2024-12-09 10:05:20.105388] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:54:13.142 [2024-12-09 10:05:20.105414] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:54:13.142 [2024-12-09 10:05:20.112301] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:54:13.142 [2024-12-09 10:05:20.112428] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:54:13.142 [2024-12-09 10:05:20.129297] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:54:13.142 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:13.142 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:54:13.142 10:05:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:54:13.142 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:13.142 10:05:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:54:13.142 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75668 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75668 ']' 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75668 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75668 00:54:13.400 killing process with pid 75668 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:54:13.400 
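[Editor's note] The checks completed just above are the pass criterion for the replay: ublk_get_disks must report the device and the node must be a real block device. In isolation, that verification looks like the following (the jq filter is taken from the log; the success message is illustrative):

    blkpath=$(./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
    [[ $blkpath == /dev/ublkb0 && -b $blkpath ]] && echo "config restored OK"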
10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75668' 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75668 00:54:13.400 10:05:20 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75668 00:54:15.408 [2024-12-09 10:05:21.956976] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:54:15.408 [2024-12-09 10:05:21.985392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:54:15.408 [2024-12-09 10:05:21.985595] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:54:15.408 [2024-12-09 10:05:21.994307] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:54:15.408 [2024-12-09 10:05:21.994374] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:54:15.408 [2024-12-09 10:05:21.994388] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:54:15.408 [2024-12-09 10:05:21.994420] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:54:15.408 [2024-12-09 10:05:21.994612] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:54:16.784 10:05:23 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:54:16.784 ************************************ 00:54:16.784 END TEST test_save_ublk_config 00:54:16.784 ************************************ 00:54:16.784 00:54:16.784 real 0m10.312s 00:54:16.784 user 0m7.792s 00:54:16.784 sys 0m3.471s 00:54:16.784 10:05:23 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:16.784 10:05:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:54:17.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:17.043 10:05:23 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75759 00:54:17.043 10:05:23 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:54:17.043 10:05:23 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:54:17.043 10:05:23 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75759 00:54:17.043 10:05:23 ublk -- common/autotest_common.sh@835 -- # '[' -z 75759 ']' 00:54:17.043 10:05:23 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:17.043 10:05:23 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:17.043 10:05:23 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:17.043 10:05:23 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:17.043 10:05:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:54:17.043 [2024-12-09 10:05:24.018920] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
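[Editor's note] For the main ublk suite the target is started with -m 0x3, a core mask with bits 0 and 1 set, which is why two "Reactor started" notices (cores 0 and 1) follow below. A purely illustrative decode of the mask:

    # SPDK spawns one reactor per set bit in the core mask.
    mask=0x3
    for i in 0 1 2 3; do (( (mask >> i) & 1 )) && echo "reactor on core $i"; done
    # prints: reactor on core 0 / reactor on core 1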
00:54:17.043 [2024-12-09 10:05:24.019329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75759 ] 00:54:17.301 [2024-12-09 10:05:24.254981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:54:17.559 [2024-12-09 10:05:24.408514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:17.559 [2024-12-09 10:05:24.408518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:18.498 10:05:25 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:18.498 10:05:25 ublk -- common/autotest_common.sh@868 -- # return 0 00:54:18.498 10:05:25 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:54:18.498 10:05:25 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:54:18.498 10:05:25 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:18.498 10:05:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:54:18.498 ************************************ 00:54:18.498 START TEST test_create_ublk 00:54:18.498 ************************************ 00:54:18.498 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:54:18.498 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:54:18.498 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:18.498 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:18.498 [2024-12-09 10:05:25.305279] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:54:18.498 [2024-12-09 10:05:25.308175] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:54:18.498 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:18.498 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:54:18.498 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:54:18.498 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:18.498 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:18.762 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:18.762 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:54:18.762 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:54:18.762 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:18.762 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:18.762 [2024-12-09 10:05:25.599453] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:54:18.762 [2024-12-09 10:05:25.599971] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:54:18.762 [2024-12-09 10:05:25.600000] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:54:18.762 [2024-12-09 10:05:25.600010] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:54:18.762 [2024-12-09 10:05:25.607302] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:54:18.762 [2024-12-09 10:05:25.607333] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:54:18.762 
[2024-12-09 10:05:25.615283] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:54:18.762 [2024-12-09 10:05:25.616052] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:54:18.762 [2024-12-09 10:05:25.638296] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:54:18.762 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:18.762 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:54:18.762 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:54:18.762 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:54:18.762 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:18.762 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:18.762 10:05:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:18.762 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:54:18.762 { 00:54:18.762 "ublk_device": "/dev/ublkb0", 00:54:18.762 "id": 0, 00:54:18.762 "queue_depth": 512, 00:54:18.762 "num_queues": 4, 00:54:18.762 "bdev_name": "Malloc0" 00:54:18.762 } 00:54:18.762 ]' 00:54:18.762 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:54:18.762 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:54:18.762 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:54:18.762 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:54:18.762 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:54:19.021 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:54:19.021 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:54:19.021 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:54:19.021 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:54:19.021 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:54:19.021 10:05:25 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:54:19.021 10:05:25 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:54:19.021 10:05:25 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:54:19.021 10:05:25 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:54:19.021 10:05:25 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:54:19.021 10:05:25 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:54:19.021 10:05:25 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:54:19.021 10:05:25 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:54:19.021 10:05:25 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:54:19.021 10:05:25 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:54:19.021 10:05:25 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
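[Editor's note] run_fio_test assembles the long fio command line seen above; reflowed for readability it is this single invocation (flags identical to the log). Because --time_based spends the entire 10 s runtime writing, fio warns up front that the verification read phase will never start — the warning at the top of the fio output below is expected, not an error:

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0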
00:54:19.021 10:05:25 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:54:19.021 fio: verification read phase will never start because write phase uses all of runtime 00:54:19.021 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:54:19.021 fio-3.35 00:54:19.021 Starting 1 process 00:54:31.224 00:54:31.224 fio_test: (groupid=0, jobs=1): err= 0: pid=75807: Mon Dec 9 10:05:36 2024 00:54:31.224 write: IOPS=9879, BW=38.6MiB/s (40.5MB/s)(386MiB/10001msec); 0 zone resets 00:54:31.224 clat (usec): min=64, max=4055, avg=99.48, stdev=148.48 00:54:31.224 lat (usec): min=65, max=4056, avg=100.42, stdev=148.50 00:54:31.224 clat percentiles (usec): 00:54:31.224 | 1.00th=[ 72], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 84], 00:54:31.224 | 30.00th=[ 85], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 88], 00:54:31.225 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 106], 95.00th=[ 118], 00:54:31.225 | 99.00th=[ 139], 99.50th=[ 243], 99.90th=[ 2999], 99.95th=[ 3359], 00:54:31.225 | 99.99th=[ 3851] 00:54:31.225 bw ( KiB/s): min=34976, max=43392, per=100.00%, avg=39674.11, stdev=2021.60, samples=19 00:54:31.225 iops : min= 8744, max=10848, avg=9918.53, stdev=505.40, samples=19 00:54:31.225 lat (usec) : 100=85.64%, 250=13.87%, 500=0.05%, 750=0.03%, 1000=0.04% 00:54:31.225 lat (msec) : 2=0.16%, 4=0.23%, 10=0.01% 00:54:31.225 cpu : usr=3.43%, sys=8.21%, ctx=98802, majf=0, minf=795 00:54:31.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:31.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:31.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:31.225 issued rwts: total=0,98800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:31.225 latency : target=0, window=0, percentile=100.00%, depth=1 00:54:31.225 00:54:31.225 Run status group 0 (all jobs): 00:54:31.225 WRITE: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=386MiB (405MB), run=10001-10001msec 00:54:31.225 00:54:31.225 Disk stats (read/write): 00:54:31.225 ublkb0: ios=0/97843, merge=0/0, ticks=0/8802, in_queue=8803, util=99.08% 00:54:31.225 10:05:36 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.225 [2024-12-09 10:05:36.189811] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:54:31.225 [2024-12-09 10:05:36.236370] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:54:31.225 [2024-12-09 10:05:36.237374] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:54:31.225 [2024-12-09 10:05:36.247351] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:54:31.225 [2024-12-09 10:05:36.251662] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:54:31.225 [2024-12-09 10:05:36.251707] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.225 10:05:36 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:54:31.225 10:05:36 
ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.225 [2024-12-09 10:05:36.262409] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:54:31.225 request: 00:54:31.225 { 00:54:31.225 "ublk_id": 0, 00:54:31.225 "method": "ublk_stop_disk", 00:54:31.225 "req_id": 1 00:54:31.225 } 00:54:31.225 Got JSON-RPC error response 00:54:31.225 response: 00:54:31.225 { 00:54:31.225 "code": -19, 00:54:31.225 "message": "No such device" 00:54:31.225 } 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:54:31.225 10:05:36 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.225 [2024-12-09 10:05:36.277404] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:54:31.225 [2024-12-09 10:05:36.285277] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:54:31.225 [2024-12-09 10:05:36.285330] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.225 10:05:36 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.225 10:05:36 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:54:31.225 10:05:36 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.225 10:05:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.225 10:05:36 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:54:31.225 10:05:36 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:54:31.225 10:05:37 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:54:31.225 10:05:37 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:54:31.225 10:05:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.225 10:05:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.225 10:05:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.225 10:05:37 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:54:31.225 10:05:37 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:54:31.225 10:05:37 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:54:31.225 00:54:31.225 real 0m11.786s 00:54:31.225 user 0m0.813s 00:54:31.225 sys 0m0.920s 00:54:31.225 10:05:37 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:31.225 10:05:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.225 ************************************ 00:54:31.225 END TEST test_create_ublk 00:54:31.225 ************************************ 00:54:31.225 10:05:37 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:54:31.225 10:05:37 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:54:31.225 10:05:37 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:31.225 10:05:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.225 ************************************ 00:54:31.225 START TEST test_create_multi_ublk 00:54:31.225 ************************************ 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.225 [2024-12-09 10:05:37.140275] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:54:31.225 [2024-12-09 10:05:37.143058] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.225 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.225 [2024-12-09 10:05:37.455464] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:54:31.225 [2024-12-09 
10:05:37.456018] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:54:31.225 [2024-12-09 10:05:37.456042] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:54:31.225 [2024-12-09 10:05:37.456058] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:54:31.225 [2024-12-09 10:05:37.464624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:54:31.226 [2024-12-09 10:05:37.464658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:54:31.226 [2024-12-09 10:05:37.471287] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:54:31.226 [2024-12-09 10:05:37.472088] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:54:31.226 [2024-12-09 10:05:37.482660] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.226 [2024-12-09 10:05:37.784486] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:54:31.226 [2024-12-09 10:05:37.785087] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:54:31.226 [2024-12-09 10:05:37.785116] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:54:31.226 [2024-12-09 10:05:37.785136] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:54:31.226 [2024-12-09 10:05:37.792332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:54:31.226 [2024-12-09 10:05:37.792369] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:54:31.226 [2024-12-09 10:05:37.800312] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:54:31.226 [2024-12-09 10:05:37.801184] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:54:31.226 [2024-12-09 10:05:37.817301] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.226 10:05:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.226 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.226 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:54:31.226 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:54:31.226 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.226 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.226 [2024-12-09 10:05:38.128448] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:54:31.226 [2024-12-09 10:05:38.129077] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:54:31.226 [2024-12-09 10:05:38.129104] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:54:31.226 [2024-12-09 10:05:38.129120] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:54:31.226 [2024-12-09 10:05:38.136309] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:54:31.226 [2024-12-09 10:05:38.136350] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:54:31.226 [2024-12-09 10:05:38.144300] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:54:31.226 [2024-12-09 10:05:38.145150] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:54:31.226 [2024-12-09 10:05:38.153287] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:54:31.226 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.226 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:54:31.226 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:31.226 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:54:31.226 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.226 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:31.484 [2024-12-09 10:05:38.464469] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:54:31.484 [2024-12-09 10:05:38.465012] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:54:31.484 [2024-12-09 10:05:38.465041] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:54:31.484 [2024-12-09 10:05:38.465051] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:54:31.484 [2024-12-09 10:05:38.472317] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:54:31.484 [2024-12-09 10:05:38.472352] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:54:31.484 [2024-12-09 10:05:38.480328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:54:31.484 [2024-12-09 10:05:38.481200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:54:31.484 [2024-12-09 10:05:38.484448] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:54:31.484 {
00:54:31.484 "ublk_device": "/dev/ublkb0",
00:54:31.484 "id": 0,
00:54:31.484 "queue_depth": 512,
00:54:31.484 "num_queues": 4,
00:54:31.484 "bdev_name": "Malloc0"
00:54:31.484 },
00:54:31.484 {
00:54:31.484 "ublk_device": "/dev/ublkb1",
00:54:31.484 "id": 1,
00:54:31.484 "queue_depth": 512,
00:54:31.484 "num_queues": 4,
00:54:31.484 "bdev_name": "Malloc1"
00:54:31.484 },
00:54:31.484 {
00:54:31.484 "ublk_device": "/dev/ublkb2",
00:54:31.484 "id": 2,
00:54:31.484 "queue_depth": 512,
00:54:31.484 "num_queues": 4,
00:54:31.484 "bdev_name": "Malloc2"
00:54:31.484 },
00:54:31.484 {
00:54:31.484 "ublk_device": "/dev/ublkb3",
00:54:31.484 "id": 3,
00:54:31.484 "queue_depth": 512,
00:54:31.484 "num_queues": 4,
00:54:31.484 "bdev_name": "Malloc3"
00:54:31.484 }
00:54:31.484 ]'
00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:54:31.484 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:54:31.743 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:54:31.743 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:54:31.743 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:54:31.743 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:54:31.743 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:54:31.743 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:54:31.743 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:54:31.743 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:54:31.743 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:54:31.743 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:54:31.743 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
00:54:32.001 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]]
00:54:32.001 10:05:38
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:54:32.002 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:54:32.002 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:54:32.002 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:54:32.002 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:54:32.002 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:54:32.002 10:05:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:54:32.002 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:54:32.002 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:32.002 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:54:32.260 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:54:32.260 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:54:32.260 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:54:32.260 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:54:32.260 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:54:32.260 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:54:32.260 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:54:32.260 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:54:32.260 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:54:32.260 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:32.260 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:32.519 10:05:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:32.519 [2024-12-09 10:05:39.549426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:54:32.778 [2024-12-09 10:05:39.588870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:54:32.778 [2024-12-09 10:05:39.590067] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:54:32.778 [2024-12-09 10:05:39.596291] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:54:32.778 [2024-12-09 10:05:39.596618] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:54:32.778 [2024-12-09 10:05:39.596637] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:32.778 [2024-12-09 10:05:39.610408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:54:32.778 [2024-12-09 10:05:39.650339] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:54:32.778 [2024-12-09 10:05:39.655375] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:54:32.778 [2024-12-09 10:05:39.659539] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:54:32.778 [2024-12-09 10:05:39.659871] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:54:32.778 [2024-12-09 10:05:39.659891] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:32.778 [2024-12-09 10:05:39.678440] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:54:32.778 [2024-12-09 10:05:39.711868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:54:32.778 [2024-12-09 10:05:39.712915] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:54:32.778 [2024-12-09 10:05:39.723337] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:54:32.778 [2024-12-09 10:05:39.723647] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:54:32.778 [2024-12-09 10:05:39.723670] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:32.778 [2024-12-09 
10:05:39.738379] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:54:32.778 [2024-12-09 10:05:39.773874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:54:32.778 [2024-12-09 10:05:39.774925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:54:32.778 [2024-12-09 10:05:39.781293] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:54:32.778 [2024-12-09 10:05:39.781618] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:54:32.778 [2024-12-09 10:05:39.781644] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:32.778 10:05:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:54:33.345 [2024-12-09 10:05:40.093387] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:54:33.345 [2024-12-09 10:05:40.101271] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:54:33.345 [2024-12-09 10:05:40.101320] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:54:33.345 10:05:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:54:33.345 10:05:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:33.345 10:05:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:54:33.345 10:05:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:33.345 10:05:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:33.912 10:05:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:33.912 10:05:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:33.912 10:05:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:54:33.912 10:05:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:33.912 10:05:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:34.171 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:34.171 10:05:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:34.171 10:05:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:54:34.171 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:34.171 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:34.429 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:34.429 10:05:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:54:34.429 10:05:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:54:34.429 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:34.429 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
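
The trace above is the whole multi-ublk life cycle: create the target, back four ublk devices with malloc bdevs, verify them via ublk_get_disks, then stop each disk and tear the target down. For reference, a minimal sketch of the same flow driven by hand with rpc.py (assuming a running spdk_tgt built with ublk support and the ublk_drv kernel module loaded; every RPC name below appears in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc ublk_create_target
  for i in 0 1 2 3; do
      $rpc bdev_malloc_create -b Malloc$i 128 4096   # 128 MiB bdev, 4 KiB blocks
      $rpc ublk_start_disk Malloc$i $i -q 4 -d 512   # exposes /dev/ublkb$i
  done
  $rpc ublk_get_disks                                # prints the JSON array shown above
  for i in 0 1 2 3; do
      $rpc ublk_stop_disk $i                         # drives STOP_DEV then DEL_DEV in the kernel
      $rpc bdev_malloc_delete Malloc$i
  done
  $rpc ublk_destroy_target
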
00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:54:34.997 00:54:34.997 real 0m4.785s 00:54:34.997 user 0m1.374s 00:54:34.997 sys 0m0.163s 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:34.997 10:05:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:54:34.997 ************************************ 00:54:34.997 END TEST test_create_multi_ublk 00:54:34.997 ************************************ 00:54:34.997 10:05:41 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:54:34.997 10:05:41 ublk -- ublk/ublk.sh@147 -- # cleanup 00:54:34.997 10:05:41 ublk -- ublk/ublk.sh@130 -- # killprocess 75759 00:54:34.997 10:05:41 ublk -- common/autotest_common.sh@954 -- # '[' -z 75759 ']' 00:54:34.997 10:05:41 ublk -- common/autotest_common.sh@958 -- # kill -0 75759 00:54:34.997 10:05:41 ublk -- common/autotest_common.sh@959 -- # uname 00:54:34.997 10:05:41 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:34.997 10:05:41 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75759 00:54:34.997 10:05:41 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:54:34.997 10:05:41 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:34.997 killing process with pid 75759 00:54:34.997 10:05:41 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75759' 00:54:34.997 10:05:41 ublk -- common/autotest_common.sh@973 -- # kill 75759 00:54:34.997 10:05:41 ublk -- common/autotest_common.sh@978 -- # wait 75759 00:54:36.373 [2024-12-09 10:05:43.023807] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:54:36.373 [2024-12-09 10:05:43.023881] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:54:37.308 00:54:37.308 real 0m30.921s 00:54:37.308 user 0m44.113s 00:54:37.308 sys 0m10.907s 00:54:37.308 10:05:44 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:37.308 10:05:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:54:37.308 ************************************ 00:54:37.308 END TEST ublk 00:54:37.308 ************************************ 00:54:37.308 10:05:44 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:54:37.308 10:05:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:54:37.308 
10:05:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:37.308 10:05:44 -- common/autotest_common.sh@10 -- # set +x 00:54:37.308 ************************************ 00:54:37.308 START TEST ublk_recovery 00:54:37.308 ************************************ 00:54:37.308 10:05:44 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:54:37.308 * Looking for test storage... 00:54:37.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:54:37.308 10:05:44 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:54:37.308 10:05:44 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:54:37.308 10:05:44 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:54:37.566 10:05:44 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:37.566 10:05:44 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:54:37.566 10:05:44 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:37.566 10:05:44 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:54:37.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:37.566 --rc genhtml_branch_coverage=1 00:54:37.566 --rc genhtml_function_coverage=1 00:54:37.566 --rc genhtml_legend=1 00:54:37.566 --rc geninfo_all_blocks=1 00:54:37.566 --rc geninfo_unexecuted_blocks=1 00:54:37.567 00:54:37.567 ' 00:54:37.567 10:05:44 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:54:37.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:37.567 --rc genhtml_branch_coverage=1 00:54:37.567 --rc genhtml_function_coverage=1 00:54:37.567 --rc genhtml_legend=1 00:54:37.567 --rc geninfo_all_blocks=1 00:54:37.567 --rc geninfo_unexecuted_blocks=1 00:54:37.567 00:54:37.567 ' 00:54:37.567 10:05:44 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:54:37.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:37.567 --rc genhtml_branch_coverage=1 00:54:37.567 --rc genhtml_function_coverage=1 00:54:37.567 --rc genhtml_legend=1 00:54:37.567 --rc geninfo_all_blocks=1 00:54:37.567 --rc geninfo_unexecuted_blocks=1 00:54:37.567 00:54:37.567 ' 00:54:37.567 10:05:44 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:54:37.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:37.567 --rc genhtml_branch_coverage=1 00:54:37.567 --rc genhtml_function_coverage=1 00:54:37.567 --rc genhtml_legend=1 00:54:37.567 --rc geninfo_all_blocks=1 00:54:37.567 --rc geninfo_unexecuted_blocks=1 00:54:37.567 00:54:37.567 ' 00:54:37.567 10:05:44 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:54:37.567 10:05:44 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:54:37.567 10:05:44 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:54:37.567 10:05:44 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:54:37.567 10:05:44 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:54:37.567 10:05:44 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:54:37.567 10:05:44 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:54:37.567 10:05:44 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:54:37.567 10:05:44 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:54:37.567 10:05:44 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:54:37.567 10:05:44 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76183 00:54:37.567 10:05:44 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:54:37.567 10:05:44 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:54:37.567 10:05:44 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76183 00:54:37.567 10:05:44 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76183 ']' 00:54:37.567 10:05:44 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:37.567 10:05:44 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:37.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:37.567 10:05:44 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:37.567 10:05:44 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:37.567 10:05:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:54:37.567 [2024-12-09 10:05:44.579000] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:54:37.567 [2024-12-09 10:05:44.579193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76183 ] 00:54:37.825 [2024-12-09 10:05:44.766519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:54:38.084 [2024-12-09 10:05:44.915581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:38.084 [2024-12-09 10:05:44.915588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:39.019 10:05:45 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:39.019 10:05:45 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:54:39.019 10:05:45 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:54:39.019 10:05:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:39.019 10:05:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:54:39.019 [2024-12-09 10:05:45.777529] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:54:39.019 [2024-12-09 10:05:45.780525] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:54:39.019 10:05:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:39.019 10:05:45 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:54:39.019 10:05:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:39.019 10:05:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:54:39.019 malloc0 00:54:39.019 10:05:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:39.019 10:05:45 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:54:39.019 10:05:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:39.019 10:05:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:54:39.019 [2024-12-09 10:05:45.926485] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:54:39.019 [2024-12-09 10:05:45.926622] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:54:39.019 [2024-12-09 10:05:45.926643] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:54:39.019 [2024-12-09 10:05:45.926652] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:54:39.019 [2024-12-09 10:05:45.935401] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:54:39.019 [2024-12-09 10:05:45.935438] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:54:39.019 [2024-12-09 10:05:45.942313] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:54:39.019 [2024-12-09 10:05:45.942493] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:54:39.019 [2024-12-09 10:05:45.965705] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:54:39.019 1 00:54:39.019 10:05:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:39.019 10:05:45 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:54:39.955 10:05:46 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76224 00:54:39.955 10:05:46 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:54:39.955 10:05:46 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:54:40.238 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:54:40.238 fio-3.35 00:54:40.238 Starting 1 process 00:54:45.536 10:05:51 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76183 00:54:45.536 10:05:51 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:54:50.801 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76183 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:54:50.801 10:05:56 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76324 00:54:50.801 10:05:56 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:54:50.801 10:05:56 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:54:50.801 10:05:56 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76324 00:54:50.801 10:05:56 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76324 ']' 00:54:50.801 10:05:56 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:50.801 10:05:56 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:50.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:50.801 10:05:56 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:50.801 10:05:56 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:50.801 10:05:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:54:50.801 [2024-12-09 10:05:57.102227] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
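
At this point the first target (pid 76183) has been hard-killed while fio was driving random I/O against /dev/ublkb1, and a second spdk_tgt is coming up in its place. The recovery path the harness exercises next (ublk_recover_disk, followed by the START/END_USER_RECOVERY control commands below) can be outlined as follows; this is a simplified sketch of the commands visible in this run, with $spdk_pid standing in for the first target's PID:

  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  kill -9 "$spdk_pid"                              # SIGKILL the target mid-I/O
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &        # restart the target
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096     # recreate the backing bdev
  rpc.py ublk_recover_disk malloc0 1               # re-attach the existing /dev/ublkb1
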
00:54:50.801 [2024-12-09 10:05:57.102395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76324 ] 00:54:50.801 [2024-12-09 10:05:57.278600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:54:50.801 [2024-12-09 10:05:57.408922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:50.801 [2024-12-09 10:05:57.408938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:51.369 10:05:58 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:51.369 10:05:58 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:54:51.369 10:05:58 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:54:51.369 10:05:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:51.369 10:05:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:54:51.369 [2024-12-09 10:05:58.273276] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:54:51.369 [2024-12-09 10:05:58.276112] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:54:51.369 10:05:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:51.369 10:05:58 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:54:51.369 10:05:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:51.369 10:05:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:54:51.369 malloc0 00:54:51.628 10:05:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:51.628 10:05:58 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:54:51.628 10:05:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:51.628 10:05:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:54:51.628 [2024-12-09 10:05:58.417813] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:54:51.628 [2024-12-09 10:05:58.417867] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:54:51.628 [2024-12-09 10:05:58.417883] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:54:51.628 [2024-12-09 10:05:58.425322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:54:51.628 [2024-12-09 10:05:58.425353] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:54:51.628 1 00:54:51.628 10:05:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:51.628 10:05:58 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76224 00:54:52.563 [2024-12-09 10:05:59.425396] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:54:52.563 [2024-12-09 10:05:59.433301] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:54:52.563 [2024-12-09 10:05:59.433328] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:54:53.497 [2024-12-09 10:06:00.433365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:54:53.497 [2024-12-09 10:06:00.441283] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:54:53.497 [2024-12-09 10:06:00.441311] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1
00:54:54.431 [2024-12-09 10:06:01.441344] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:54:54.431 [2024-12-09 10:06:01.449292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:54:54.431 [2024-12-09 10:06:01.449321] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:54:54.431 [2024-12-09 10:06:01.449337] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:54:54.431 [2024-12-09 10:06:01.449452] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:55:16.359 [2024-12-09 10:06:22.161330] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:55:16.359 [2024-12-09 10:06:22.169068] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:55:16.359 [2024-12-09 10:06:22.176543] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:55:16.359 [2024-12-09 10:06:22.176573] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:55:42.900
00:55:42.900 fio_test: (groupid=0, jobs=1): err= 0: pid=76227: Mon Dec 9 10:06:47 2024
00:55:42.900 read: IOPS=9722, BW=38.0MiB/s (39.8MB/s)(2279MiB/60003msec)
00:55:42.900 slat (nsec): min=1750, max=472918, avg=6616.02, stdev=3123.90
00:55:42.900 clat (usec): min=1397, max=30205k, avg=6937.58, stdev=335475.49
00:55:42.900 lat (usec): min=1405, max=30205k, avg=6944.20, stdev=335475.50
00:55:42.900 clat percentiles (msec):
00:55:42.900 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3],
00:55:42.900 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 4], 60.00th=[ 4],
00:55:42.900 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5],
00:55:42.900 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 13],
00:55:42.900 | 99.99th=[17113]
00:55:42.900 bw ( KiB/s): min= 4544, max=84728, per=100.00%, avg=76626.03, stdev=13220.49, samples=60
00:55:42.900 iops : min= 1136, max=21182, avg=19156.47, stdev=3305.11, samples=60
00:55:42.900 write: IOPS=9714, BW=37.9MiB/s (39.8MB/s)(2277MiB/60003msec); 0 zone resets
00:55:42.900 slat (nsec): min=1837, max=3202.2k, avg=6815.31, stdev=5225.38
00:55:42.900 clat (usec): min=1129, max=30205k, avg=6216.64, stdev=295990.28
00:55:42.900 lat (usec): min=1137, max=30205k, avg=6223.45, stdev=295990.29
00:55:42.900 clat percentiles (usec):
00:55:42.900 | 1.00th=[ 2638], 5.00th=[ 2900], 10.00th=[ 2966], 20.00th=[ 3032],
00:55:42.900 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3228],
00:55:42.900 | 70.00th=[ 3294], 80.00th=[ 3359], 90.00th=[ 3589], 95.00th=[ 4424],
00:55:42.900 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 8717], 99.95th=[12649],
00:55:42.900 | 99.99th=[13829]
00:55:42.900 bw ( KiB/s): min= 4632, max=84784, per=100.00%, avg=76517.70, stdev=13147.47, samples=60
00:55:42.900 iops : min= 1158, max=21196, avg=19129.38, stdev=3286.86, samples=60
00:55:42.900 lat (msec) : 2=0.04%, 4=93.41%, 10=6.48%, 20=0.05%, >=2000=0.01%
00:55:42.900 cpu : usr=5.61%, sys=12.49%, ctx=39222, majf=0, minf=13
00:55:42.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:55:42.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:55:42.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:55:42.900 issued rwts: total=583408,582919,0,0 short=0,0,0,0 dropped=0,0,0,0
00:55:42.900 latency : target=0, window=0, percentile=100.00%, depth=128
00:55:42.900
00:55:42.900 Run status group 0 (all jobs):
00:55:42.900 READ: bw=38.0MiB/s (39.8MB/s), 38.0MiB/s-38.0MiB/s (39.8MB/s-39.8MB/s), io=2279MiB (2390MB), run=60003-60003msec
00:55:42.900 WRITE: bw=37.9MiB/s (39.8MB/s), 37.9MiB/s-37.9MiB/s (39.8MB/s-39.8MB/s), io=2277MiB (2388MB), run=60003-60003msec
00:55:42.900
00:55:42.900 Disk stats (read/write):
00:55:42.900 ublkb1: ios=581239/580553, merge=0/0, ticks=3985558/3494604, in_queue=7480163, util=99.95%
00:55:42.901 10:06:47 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:55:42.901 [2024-12-09 10:06:47.251247] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:55:42.901 [2024-12-09 10:06:47.291411] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:55:42.901 [2024-12-09 10:06:47.291685] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:55:42.901 [2024-12-09 10:06:47.298300] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:55:42.901 [2024-12-09 10:06:47.298444] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:55:42.901 [2024-12-09 10:06:47.298458] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:55:42.901 10:06:47 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:55:42.901 [2024-12-09 10:06:47.314419] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:55:42.901 [2024-12-09 10:06:47.324519] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:55:42.901 [2024-12-09 10:06:47.324583] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:55:42.901 10:06:47 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:55:42.901 10:06:47 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:55:42.901 10:06:47 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76324
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76324 ']'
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76324
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76324
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:55:42.901 killing process with pid 76324 10:06:47 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76324'
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76324
00:55:42.901 10:06:47 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76324
00:55:42.901 [2024-12-09 10:06:48.903425] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
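
The fio summary above can be cross-checked from the raw I/O counts it reports: 583408 reads of 4 KiB completed in 60003 ms. A quick sanity check with bc reproduces the headline figures (the same arithmetic fio performs before rounding):

  echo 'scale=1; 583408 * 4096 / 60.003 / 1024 / 1024' | bc   # 37.9 -> fio rounds to 38.0MiB/s
  echo 'scale=1; 583408 * 4096 / 60.003 / 1000 / 1000' | bc   # 39.8 -> matches 39.8MB/s
  echo 'scale=0; 583408 * 4096 / 1024 / 1024' | bc            # 2279 -> matches io=2279MiB
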
00:55:42.901 [2024-12-09 10:06:48.903511] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:55:43.467 00:55:43.467 real 1m6.062s 00:55:43.467 user 1m51.405s 00:55:43.467 sys 0m20.765s 00:55:43.467 10:06:50 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:43.467 10:06:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:55:43.467 ************************************ 00:55:43.467 END TEST ublk_recovery 00:55:43.467 ************************************ 00:55:43.467 10:06:50 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:55:43.467 10:06:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:55:43.467 10:06:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:55:43.467 10:06:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:43.467 10:06:50 -- common/autotest_common.sh@10 -- # set +x 00:55:43.467 10:06:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:55:43.467 10:06:50 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:55:43.467 10:06:50 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:55:43.467 10:06:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:55:43.467 10:06:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:55:43.467 10:06:50 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:55:43.467 10:06:50 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:55:43.467 10:06:50 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:55:43.467 10:06:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:55:43.467 10:06:50 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:55:43.467 10:06:50 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:55:43.467 10:06:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:55:43.467 10:06:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:43.467 10:06:50 -- common/autotest_common.sh@10 -- # set +x 00:55:43.467 ************************************ 00:55:43.467 START TEST ftl 00:55:43.467 ************************************ 00:55:43.467 10:06:50 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:55:43.726 * Looking for test storage... 00:55:43.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:55:43.726 10:06:50 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:43.726 10:06:50 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:55:43.726 10:06:50 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:43.726 10:06:50 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:43.726 10:06:50 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:43.726 10:06:50 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:43.726 10:06:50 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:43.726 10:06:50 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:55:43.726 10:06:50 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:55:43.726 10:06:50 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:55:43.726 10:06:50 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:55:43.726 10:06:50 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:55:43.726 10:06:50 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:55:43.726 10:06:50 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:55:43.726 10:06:50 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:43.726 10:06:50 ftl -- scripts/common.sh@344 -- # case "$op" in 00:55:43.726 10:06:50 ftl -- scripts/common.sh@345 -- # : 1 00:55:43.726 10:06:50 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:43.726 10:06:50 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:55:43.726 10:06:50 ftl -- scripts/common.sh@365 -- # decimal 1 00:55:43.726 10:06:50 ftl -- scripts/common.sh@353 -- # local d=1 00:55:43.726 10:06:50 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:43.726 10:06:50 ftl -- scripts/common.sh@355 -- # echo 1 00:55:43.726 10:06:50 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:55:43.726 10:06:50 ftl -- scripts/common.sh@366 -- # decimal 2 00:55:43.726 10:06:50 ftl -- scripts/common.sh@353 -- # local d=2 00:55:43.726 10:06:50 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:43.726 10:06:50 ftl -- scripts/common.sh@355 -- # echo 2 00:55:43.726 10:06:50 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:55:43.726 10:06:50 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:43.726 10:06:50 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:43.726 10:06:50 ftl -- scripts/common.sh@368 -- # return 0 00:55:43.726 10:06:50 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:43.726 10:06:50 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:43.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:43.726 --rc genhtml_branch_coverage=1 00:55:43.726 --rc genhtml_function_coverage=1 00:55:43.726 --rc genhtml_legend=1 00:55:43.726 --rc geninfo_all_blocks=1 00:55:43.726 --rc geninfo_unexecuted_blocks=1 00:55:43.726 00:55:43.726 ' 00:55:43.726 10:06:50 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:43.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:43.726 --rc genhtml_branch_coverage=1 00:55:43.726 --rc genhtml_function_coverage=1 00:55:43.726 --rc genhtml_legend=1 00:55:43.726 --rc geninfo_all_blocks=1 00:55:43.726 --rc geninfo_unexecuted_blocks=1 00:55:43.726 00:55:43.726 ' 00:55:43.726 10:06:50 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:43.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:43.726 --rc genhtml_branch_coverage=1 00:55:43.726 --rc genhtml_function_coverage=1 00:55:43.726 --rc genhtml_legend=1 00:55:43.726 --rc geninfo_all_blocks=1 00:55:43.726 --rc geninfo_unexecuted_blocks=1 00:55:43.726 00:55:43.726 ' 00:55:43.726 10:06:50 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:43.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:43.726 --rc genhtml_branch_coverage=1 00:55:43.726 --rc genhtml_function_coverage=1 00:55:43.726 --rc genhtml_legend=1 00:55:43.726 --rc geninfo_all_blocks=1 00:55:43.726 --rc geninfo_unexecuted_blocks=1 00:55:43.726 00:55:43.726 ' 00:55:43.726 10:06:50 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:55:43.726 10:06:50 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:55:43.726 10:06:50 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:55:43.726 10:06:50 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:55:43.726 10:06:50 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
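
The ftl/common.sh trace here (with the rootdir assignment just below) is the usual SPDK pattern for a test locating itself and the repository root before anything else runs. Schematically, the idiom being traced is:

  testdir=$(readlink -f "$(dirname "$0")")   # resolves to /home/vagrant/spdk_repo/spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")    # resolves to /home/vagrant/spdk_repo/spdk
  rpc_py=$rootdir/scripts/rpc.py             # the test then drives the target through this
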
00:55:43.726 10:06:50 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:55:43.726 10:06:50 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:43.726 10:06:50 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:55:43.726 10:06:50 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:55:43.726 10:06:50 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:55:43.726 10:06:50 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:55:43.726 10:06:50 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:55:43.726 10:06:50 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:55:43.726 10:06:50 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:55:43.726 10:06:50 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:55:43.726 10:06:50 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:55:43.726 10:06:50 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:55:43.726 10:06:50 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:55:43.726 10:06:50 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:55:43.726 10:06:50 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:55:43.726 10:06:50 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:55:43.726 10:06:50 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:55:43.726 10:06:50 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:55:43.726 10:06:50 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:55:43.726 10:06:50 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:55:43.726 10:06:50 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:55:43.726 10:06:50 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:55:43.726 10:06:50 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:55:43.726 10:06:50 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:55:43.726 10:06:50 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:43.726 10:06:50 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:55:43.726 10:06:50 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:55:43.726 10:06:50 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:55:43.726 10:06:50 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:55:43.726 10:06:50 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:55:43.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:44.260 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:55:44.260 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:55:44.260 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:55:44.260 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:55:44.260 10:06:51 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77124 00:55:44.260 10:06:51 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:55:44.260 10:06:51 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77124 00:55:44.260 10:06:51 ftl -- common/autotest_common.sh@835 -- # '[' -z 77124 ']' 00:55:44.260 10:06:51 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:44.260 10:06:51 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:44.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:44.260 10:06:51 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:44.260 10:06:51 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:44.260 10:06:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:55:44.518 [2024-12-09 10:06:51.392234] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:55:44.518 [2024-12-09 10:06:51.392428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77124 ] 00:55:44.777 [2024-12-09 10:06:51.574901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:44.777 [2024-12-09 10:06:51.705798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:45.342 10:06:52 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:45.342 10:06:52 ftl -- common/autotest_common.sh@868 -- # return 0 00:55:45.342 10:06:52 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:55:45.907 10:06:52 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:55:46.915 10:06:53 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:55:46.915 10:06:53 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:55:47.482 10:06:54 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:55:47.482 10:06:54 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:55:47.482 10:06:54 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:55:47.740 10:06:54 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:55:47.740 10:06:54 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:55:47.740 10:06:54 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:55:47.740 10:06:54 ftl -- ftl/ftl.sh@50 -- # break 00:55:47.740 10:06:54 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:55:47.740 10:06:54 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:55:47.740 10:06:54 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:55:47.740 10:06:54 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:55:47.998 10:06:54 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:55:47.998 10:06:54 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:55:47.998 10:06:54 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:55:47.998 10:06:54 ftl -- ftl/ftl.sh@63 -- # break 00:55:47.998 10:06:54 ftl -- ftl/ftl.sh@66 -- # killprocess 77124 00:55:47.998 10:06:54 ftl -- common/autotest_common.sh@954 -- # '[' -z 77124 ']' 00:55:47.998 10:06:54 ftl -- common/autotest_common.sh@958 -- # kill -0 77124 00:55:47.998 10:06:54 ftl -- common/autotest_common.sh@959 -- # uname 00:55:47.998 10:06:54 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:47.998 10:06:54 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77124 00:55:47.998 killing process with pid 77124 00:55:47.998 10:06:55 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:47.998 10:06:55 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:47.998 10:06:55 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77124' 00:55:47.998 10:06:55 ftl -- common/autotest_common.sh@973 -- # kill 77124 00:55:47.998 10:06:55 ftl -- common/autotest_common.sh@978 -- # wait 77124 00:55:50.528 10:06:57 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:55:50.528 10:06:57 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:55:50.528 10:06:57 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:55:50.528 10:06:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:50.528 10:06:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:55:50.528 ************************************ 00:55:50.528 START TEST ftl_fio_basic 00:55:50.528 ************************************ 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:55:50.528 * Looking for test storage... 00:55:50.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:55:50.528 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:50.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:50.529 --rc genhtml_branch_coverage=1 00:55:50.529 --rc genhtml_function_coverage=1 00:55:50.529 --rc genhtml_legend=1 00:55:50.529 --rc geninfo_all_blocks=1 00:55:50.529 --rc geninfo_unexecuted_blocks=1 00:55:50.529 00:55:50.529 ' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:50.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:50.529 --rc genhtml_branch_coverage=1 00:55:50.529 --rc genhtml_function_coverage=1 00:55:50.529 --rc genhtml_legend=1 00:55:50.529 --rc geninfo_all_blocks=1 00:55:50.529 --rc geninfo_unexecuted_blocks=1 00:55:50.529 00:55:50.529 ' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:50.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:50.529 --rc genhtml_branch_coverage=1 00:55:50.529 --rc genhtml_function_coverage=1 00:55:50.529 --rc genhtml_legend=1 00:55:50.529 --rc geninfo_all_blocks=1 00:55:50.529 --rc geninfo_unexecuted_blocks=1 00:55:50.529 00:55:50.529 ' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:50.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:50.529 --rc genhtml_branch_coverage=1 00:55:50.529 --rc genhtml_function_coverage=1 00:55:50.529 --rc genhtml_legend=1 00:55:50.529 --rc geninfo_all_blocks=1 00:55:50.529 --rc geninfo_unexecuted_blocks=1 00:55:50.529 00:55:50.529 ' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
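The xtrace above shows scripts/common.sh picking lcov flags by comparing version strings field by field (the `lt 1.15 2` call). A minimal standalone sketch of that comparison; the function name and exact return handling here are illustrative, not the real helper, which splits on the same ".-:" separators:

    # Split versions on ".-:" and compare numerically, field by field.
    cmp_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v a b
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
            ((a < b)) && return 0             # strictly older
            ((a > b)) && return 1             # strictly newer
        done
        return 1                              # equal is not "less than"
    }

    # 1.15 < 2, so the legacy lcov --rc options are selected, as in the trace.
    cmp_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
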
00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77273 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77273 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77273 ']' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:50.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:50.529 10:06:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:55:50.787 [2024-12-09 10:06:57.684773] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
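waitforlisten, whose locals (rpc_addr=/var/tmp/spdk.sock, max_retries=100) appear in the trace above, is essentially a poll loop against the freshly started target's RPC socket. A hedged sketch of that pattern: rpc_get_methods and the -s socket flag are real rpc.py usage, while the loop structure and sleep interval are illustrative rather than the exact autotest_common.sh body:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100

    # Block until spdk_tgt (svcpid=77273 in this run) answers on its
    # UNIX-domain RPC socket, then let the test start issuing RPCs.
    for ((i = 0; i < max_retries; i++)); do
        if "$rpc_py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break   # target is up and listening
        fi
        sleep 0.5
    done
    ((i == max_retries)) && { echo "spdk_tgt never came up" >&2; exit 1; }
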
00:55:50.787 [2024-12-09 10:06:57.685224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77273 ] 00:55:51.046 [2024-12-09 10:06:57.876314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:55:51.046 [2024-12-09 10:06:58.019019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:51.046 [2024-12-09 10:06:58.019157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:51.046 [2024-12-09 10:06:58.019177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:55:51.979 10:06:58 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:51.979 10:06:58 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:55:51.979 10:06:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:55:51.979 10:06:58 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:55:51.979 10:06:58 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:55:51.979 10:06:58 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:55:51.979 10:06:58 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:55:51.979 10:06:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:55:52.237 10:06:59 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:55:52.237 10:06:59 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:55:52.237 10:06:59 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:55:52.237 10:06:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:55:52.237 10:06:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:55:52.237 10:06:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:55:52.237 10:06:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:55:52.237 10:06:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:55:52.496 10:06:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:55:52.496 { 00:55:52.496 "name": "nvme0n1", 00:55:52.496 "aliases": [ 00:55:52.496 "675e7495-7476-416b-90a8-1943a66f4326" 00:55:52.496 ], 00:55:52.496 "product_name": "NVMe disk", 00:55:52.496 "block_size": 4096, 00:55:52.496 "num_blocks": 1310720, 00:55:52.496 "uuid": "675e7495-7476-416b-90a8-1943a66f4326", 00:55:52.496 "numa_id": -1, 00:55:52.496 "assigned_rate_limits": { 00:55:52.496 "rw_ios_per_sec": 0, 00:55:52.496 "rw_mbytes_per_sec": 0, 00:55:52.496 "r_mbytes_per_sec": 0, 00:55:52.496 "w_mbytes_per_sec": 0 00:55:52.496 }, 00:55:52.496 "claimed": false, 00:55:52.496 "zoned": false, 00:55:52.496 "supported_io_types": { 00:55:52.496 "read": true, 00:55:52.496 "write": true, 00:55:52.496 "unmap": true, 00:55:52.496 "flush": true, 00:55:52.496 "reset": true, 00:55:52.496 "nvme_admin": true, 00:55:52.496 "nvme_io": true, 00:55:52.496 "nvme_io_md": false, 00:55:52.496 "write_zeroes": true, 00:55:52.496 "zcopy": false, 00:55:52.496 "get_zone_info": false, 00:55:52.496 "zone_management": false, 00:55:52.496 "zone_append": false, 00:55:52.496 "compare": true, 00:55:52.496 "compare_and_write": false, 00:55:52.496 "abort": true, 00:55:52.496 
"seek_hole": false, 00:55:52.496 "seek_data": false, 00:55:52.496 "copy": true, 00:55:52.496 "nvme_iov_md": false 00:55:52.496 }, 00:55:52.496 "driver_specific": { 00:55:52.496 "nvme": [ 00:55:52.496 { 00:55:52.496 "pci_address": "0000:00:11.0", 00:55:52.496 "trid": { 00:55:52.496 "trtype": "PCIe", 00:55:52.496 "traddr": "0000:00:11.0" 00:55:52.496 }, 00:55:52.496 "ctrlr_data": { 00:55:52.496 "cntlid": 0, 00:55:52.496 "vendor_id": "0x1b36", 00:55:52.496 "model_number": "QEMU NVMe Ctrl", 00:55:52.496 "serial_number": "12341", 00:55:52.496 "firmware_revision": "8.0.0", 00:55:52.496 "subnqn": "nqn.2019-08.org.qemu:12341", 00:55:52.496 "oacs": { 00:55:52.496 "security": 0, 00:55:52.496 "format": 1, 00:55:52.496 "firmware": 0, 00:55:52.496 "ns_manage": 1 00:55:52.496 }, 00:55:52.496 "multi_ctrlr": false, 00:55:52.496 "ana_reporting": false 00:55:52.496 }, 00:55:52.496 "vs": { 00:55:52.496 "nvme_version": "1.4" 00:55:52.496 }, 00:55:52.496 "ns_data": { 00:55:52.496 "id": 1, 00:55:52.496 "can_share": false 00:55:52.496 } 00:55:52.496 } 00:55:52.496 ], 00:55:52.496 "mp_policy": "active_passive" 00:55:52.496 } 00:55:52.496 } 00:55:52.496 ]' 00:55:52.496 10:06:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:55:52.753 10:06:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:55:52.754 10:06:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:55:52.754 10:06:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:55:52.754 10:06:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:55:52.754 10:06:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:55:52.754 10:06:59 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:55:52.754 10:06:59 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:55:52.754 10:06:59 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:55:52.754 10:06:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:55:52.754 10:06:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:55:53.012 10:06:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:55:53.012 10:06:59 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:55:53.270 10:07:00 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ddbf3344-4afc-472c-a9e3-23651aa1de0e 00:55:53.270 10:07:00 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ddbf3344-4afc-472c-a9e3-23651aa1de0e 00:55:53.527 10:07:00 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=3f86784b-f2fd-46a5-9529-b872b634786c 00:55:53.527 10:07:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3f86784b-f2fd-46a5-9529-b872b634786c 00:55:53.527 10:07:00 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:55:53.527 10:07:00 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:55:53.527 10:07:00 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=3f86784b-f2fd-46a5-9529-b872b634786c 00:55:53.527 10:07:00 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:55:53.527 10:07:00 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 3f86784b-f2fd-46a5-9529-b872b634786c 00:55:53.527 10:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=3f86784b-f2fd-46a5-9529-b872b634786c 
00:55:53.527 10:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:55:53.527 10:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:55:53.527 10:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:55:53.527 10:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f86784b-f2fd-46a5-9529-b872b634786c 00:55:53.786 10:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:55:53.786 { 00:55:53.786 "name": "3f86784b-f2fd-46a5-9529-b872b634786c", 00:55:53.786 "aliases": [ 00:55:53.786 "lvs/nvme0n1p0" 00:55:53.786 ], 00:55:53.786 "product_name": "Logical Volume", 00:55:53.786 "block_size": 4096, 00:55:53.786 "num_blocks": 26476544, 00:55:53.786 "uuid": "3f86784b-f2fd-46a5-9529-b872b634786c", 00:55:53.786 "assigned_rate_limits": { 00:55:53.786 "rw_ios_per_sec": 0, 00:55:53.786 "rw_mbytes_per_sec": 0, 00:55:53.786 "r_mbytes_per_sec": 0, 00:55:53.786 "w_mbytes_per_sec": 0 00:55:53.786 }, 00:55:53.786 "claimed": false, 00:55:53.786 "zoned": false, 00:55:53.786 "supported_io_types": { 00:55:53.786 "read": true, 00:55:53.786 "write": true, 00:55:53.786 "unmap": true, 00:55:53.786 "flush": false, 00:55:53.786 "reset": true, 00:55:53.786 "nvme_admin": false, 00:55:53.786 "nvme_io": false, 00:55:53.786 "nvme_io_md": false, 00:55:53.786 "write_zeroes": true, 00:55:53.786 "zcopy": false, 00:55:53.786 "get_zone_info": false, 00:55:53.786 "zone_management": false, 00:55:53.786 "zone_append": false, 00:55:53.786 "compare": false, 00:55:53.786 "compare_and_write": false, 00:55:53.786 "abort": false, 00:55:53.786 "seek_hole": true, 00:55:53.786 "seek_data": true, 00:55:53.786 "copy": false, 00:55:53.786 "nvme_iov_md": false 00:55:53.786 }, 00:55:53.786 "driver_specific": { 00:55:53.786 "lvol": { 00:55:53.786 "lvol_store_uuid": "ddbf3344-4afc-472c-a9e3-23651aa1de0e", 00:55:53.786 "base_bdev": "nvme0n1", 00:55:53.786 "thin_provision": true, 00:55:53.786 "num_allocated_clusters": 0, 00:55:53.786 "snapshot": false, 00:55:53.786 "clone": false, 00:55:53.786 "esnap_clone": false 00:55:53.786 } 00:55:53.786 } 00:55:53.786 } 00:55:53.786 ]' 00:55:53.786 10:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:55:54.044 10:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:55:54.044 10:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:55:54.044 10:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:55:54.044 10:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:55:54.044 10:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:55:54.044 10:07:00 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:55:54.044 10:07:00 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:55:54.044 10:07:00 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:55:54.302 10:07:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:55:54.302 10:07:01 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:55:54.302 10:07:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 3f86784b-f2fd-46a5-9529-b872b634786c 00:55:54.302 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=3f86784b-f2fd-46a5-9529-b872b634786c 00:55:54.302 10:07:01 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:55:54.302 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:55:54.302 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:55:54.302 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f86784b-f2fd-46a5-9529-b872b634786c 00:55:54.560 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:55:54.560 { 00:55:54.560 "name": "3f86784b-f2fd-46a5-9529-b872b634786c", 00:55:54.560 "aliases": [ 00:55:54.560 "lvs/nvme0n1p0" 00:55:54.560 ], 00:55:54.560 "product_name": "Logical Volume", 00:55:54.560 "block_size": 4096, 00:55:54.560 "num_blocks": 26476544, 00:55:54.560 "uuid": "3f86784b-f2fd-46a5-9529-b872b634786c", 00:55:54.560 "assigned_rate_limits": { 00:55:54.560 "rw_ios_per_sec": 0, 00:55:54.560 "rw_mbytes_per_sec": 0, 00:55:54.560 "r_mbytes_per_sec": 0, 00:55:54.560 "w_mbytes_per_sec": 0 00:55:54.560 }, 00:55:54.560 "claimed": false, 00:55:54.560 "zoned": false, 00:55:54.560 "supported_io_types": { 00:55:54.560 "read": true, 00:55:54.560 "write": true, 00:55:54.560 "unmap": true, 00:55:54.560 "flush": false, 00:55:54.560 "reset": true, 00:55:54.560 "nvme_admin": false, 00:55:54.560 "nvme_io": false, 00:55:54.560 "nvme_io_md": false, 00:55:54.560 "write_zeroes": true, 00:55:54.560 "zcopy": false, 00:55:54.560 "get_zone_info": false, 00:55:54.560 "zone_management": false, 00:55:54.560 "zone_append": false, 00:55:54.561 "compare": false, 00:55:54.561 "compare_and_write": false, 00:55:54.561 "abort": false, 00:55:54.561 "seek_hole": true, 00:55:54.561 "seek_data": true, 00:55:54.561 "copy": false, 00:55:54.561 "nvme_iov_md": false 00:55:54.561 }, 00:55:54.561 "driver_specific": { 00:55:54.561 "lvol": { 00:55:54.561 "lvol_store_uuid": "ddbf3344-4afc-472c-a9e3-23651aa1de0e", 00:55:54.561 "base_bdev": "nvme0n1", 00:55:54.561 "thin_provision": true, 00:55:54.561 "num_allocated_clusters": 0, 00:55:54.561 "snapshot": false, 00:55:54.561 "clone": false, 00:55:54.561 "esnap_clone": false 00:55:54.561 } 00:55:54.561 } 00:55:54.561 } 00:55:54.561 ]' 00:55:54.561 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:55:54.561 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:55:54.561 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:55:54.819 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:55:54.819 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:55:54.819 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:55:54.819 10:07:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:55:54.819 10:07:01 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:55:55.077 10:07:01 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:55:55.077 10:07:01 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:55:55.077 10:07:01 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:55:55.077 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:55:55.077 10:07:01 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 3f86784b-f2fd-46a5-9529-b872b634786c 00:55:55.077 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=3f86784b-f2fd-46a5-9529-b872b634786c 00:55:55.077 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:55:55.077 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:55:55.077 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:55:55.077 10:07:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f86784b-f2fd-46a5-9529-b872b634786c 00:55:55.336 10:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:55:55.336 { 00:55:55.336 "name": "3f86784b-f2fd-46a5-9529-b872b634786c", 00:55:55.336 "aliases": [ 00:55:55.336 "lvs/nvme0n1p0" 00:55:55.336 ], 00:55:55.336 "product_name": "Logical Volume", 00:55:55.336 "block_size": 4096, 00:55:55.336 "num_blocks": 26476544, 00:55:55.336 "uuid": "3f86784b-f2fd-46a5-9529-b872b634786c", 00:55:55.336 "assigned_rate_limits": { 00:55:55.336 "rw_ios_per_sec": 0, 00:55:55.336 "rw_mbytes_per_sec": 0, 00:55:55.336 "r_mbytes_per_sec": 0, 00:55:55.336 "w_mbytes_per_sec": 0 00:55:55.336 }, 00:55:55.336 "claimed": false, 00:55:55.336 "zoned": false, 00:55:55.336 "supported_io_types": { 00:55:55.336 "read": true, 00:55:55.336 "write": true, 00:55:55.336 "unmap": true, 00:55:55.336 "flush": false, 00:55:55.336 "reset": true, 00:55:55.336 "nvme_admin": false, 00:55:55.336 "nvme_io": false, 00:55:55.336 "nvme_io_md": false, 00:55:55.336 "write_zeroes": true, 00:55:55.336 "zcopy": false, 00:55:55.336 "get_zone_info": false, 00:55:55.336 "zone_management": false, 00:55:55.336 "zone_append": false, 00:55:55.336 "compare": false, 00:55:55.336 "compare_and_write": false, 00:55:55.336 "abort": false, 00:55:55.336 "seek_hole": true, 00:55:55.336 "seek_data": true, 00:55:55.336 "copy": false, 00:55:55.336 "nvme_iov_md": false 00:55:55.336 }, 00:55:55.336 "driver_specific": { 00:55:55.336 "lvol": { 00:55:55.336 "lvol_store_uuid": "ddbf3344-4afc-472c-a9e3-23651aa1de0e", 00:55:55.336 "base_bdev": "nvme0n1", 00:55:55.336 "thin_provision": true, 00:55:55.336 "num_allocated_clusters": 0, 00:55:55.336 "snapshot": false, 00:55:55.336 "clone": false, 00:55:55.336 "esnap_clone": false 00:55:55.336 } 00:55:55.336 } 00:55:55.336 } 00:55:55.336 ]' 00:55:55.336 10:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:55:55.336 10:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:55:55.336 10:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:55:55.336 10:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:55:55.336 10:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:55:55.336 10:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:55:55.336 10:07:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:55:55.336 10:07:02 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:55:55.336 10:07:02 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3f86784b-f2fd-46a5-9529-b872b634786c -c nvc0n1p0 --l2p_dram_limit 60 00:55:55.595 [2024-12-09 10:07:02.568521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:55.595 [2024-12-09 10:07:02.568587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:55:55.595 [2024-12-09 10:07:02.568616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:55:55.595 
[2024-12-09 10:07:02.568632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:55.595 [2024-12-09 10:07:02.568737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:55.595 [2024-12-09 10:07:02.568761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:55:55.595 [2024-12-09 10:07:02.568783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:55:55.595 [2024-12-09 10:07:02.568798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:55.595 [2024-12-09 10:07:02.568856] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:55:55.595 [2024-12-09 10:07:02.569900] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:55:55.595 [2024-12-09 10:07:02.569944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:55.595 [2024-12-09 10:07:02.569960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:55:55.595 [2024-12-09 10:07:02.569979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:55:55.595 [2024-12-09 10:07:02.570003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:55.595 [2024-12-09 10:07:02.570179] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 21838814-59a7-4409-b646-9b1c76dd8330 00:55:55.595 [2024-12-09 10:07:02.572162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:55.595 [2024-12-09 10:07:02.572402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:55:55.595 [2024-12-09 10:07:02.572436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:55:55.595 [2024-12-09 10:07:02.572455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:55.595 [2024-12-09 10:07:02.582239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:55.595 [2024-12-09 10:07:02.582326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:55:55.595 [2024-12-09 10:07:02.582348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.674 ms 00:55:55.595 [2024-12-09 10:07:02.582365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:55.595 [2024-12-09 10:07:02.582527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:55.595 [2024-12-09 10:07:02.582554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:55:55.595 [2024-12-09 10:07:02.582570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:55:55.595 [2024-12-09 10:07:02.582593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:55.595 [2024-12-09 10:07:02.582696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:55.595 [2024-12-09 10:07:02.582721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:55:55.595 [2024-12-09 10:07:02.582738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:55:55.595 [2024-12-09 10:07:02.582754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:55.595 [2024-12-09 10:07:02.582797] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:55:55.595 [2024-12-09 10:07:02.588165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:55.595 [2024-12-09 
10:07:02.588207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:55:55.595 [2024-12-09 10:07:02.588229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.372 ms 00:55:55.595 [2024-12-09 10:07:02.588263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:55.595 [2024-12-09 10:07:02.588342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:55.595 [2024-12-09 10:07:02.588362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:55:55.595 [2024-12-09 10:07:02.588380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:55:55.595 [2024-12-09 10:07:02.588394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:55.595 [2024-12-09 10:07:02.588459] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:55:55.595 [2024-12-09 10:07:02.588661] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:55:55.595 [2024-12-09 10:07:02.588703] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:55:55.595 [2024-12-09 10:07:02.588724] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:55:55.595 [2024-12-09 10:07:02.588752] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:55:55.595 [2024-12-09 10:07:02.588770] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:55:55.595 [2024-12-09 10:07:02.588793] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:55:55.595 [2024-12-09 10:07:02.588820] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:55:55.595 [2024-12-09 10:07:02.588840] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:55:55.595 [2024-12-09 10:07:02.588855] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:55:55.595 [2024-12-09 10:07:02.588880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:55.595 [2024-12-09 10:07:02.588908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:55:55.595 [2024-12-09 10:07:02.588929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:55:55.595 [2024-12-09 10:07:02.588944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:55.595 [2024-12-09 10:07:02.589064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:55.595 [2024-12-09 10:07:02.589089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:55:55.595 [2024-12-09 10:07:02.589108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:55:55.595 [2024-12-09 10:07:02.589122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:55.595 [2024-12-09 10:07:02.589281] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:55:55.595 [2024-12-09 10:07:02.589304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:55:55.595 [2024-12-09 10:07:02.589326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:55:55.595 [2024-12-09 10:07:02.589340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:55:55.595 [2024-12-09 10:07:02.589357] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:55:55.595 [2024-12-09 10:07:02.589371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:55:55.595 [2024-12-09 10:07:02.589389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:55:55.595 [2024-12-09 10:07:02.589403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:55:55.595 [2024-12-09 10:07:02.589418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:55:55.595 [2024-12-09 10:07:02.589432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:55:55.595 [2024-12-09 10:07:02.589448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:55:55.595 [2024-12-09 10:07:02.589461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:55:55.595 [2024-12-09 10:07:02.589476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:55:55.595 [2024-12-09 10:07:02.589489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:55:55.595 [2024-12-09 10:07:02.589505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:55:55.595 [2024-12-09 10:07:02.589518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:55:55.595 [2024-12-09 10:07:02.589536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:55:55.595 [2024-12-09 10:07:02.589556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:55:55.595 [2024-12-09 10:07:02.589573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:55:55.595 [2024-12-09 10:07:02.589586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:55:55.595 [2024-12-09 10:07:02.589602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:55:55.595 [2024-12-09 10:07:02.589616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:55:55.596 [2024-12-09 10:07:02.589632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:55:55.596 [2024-12-09 10:07:02.589646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:55:55.596 [2024-12-09 10:07:02.589662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:55:55.596 [2024-12-09 10:07:02.589675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:55:55.596 [2024-12-09 10:07:02.589691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:55:55.596 [2024-12-09 10:07:02.589705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:55:55.596 [2024-12-09 10:07:02.589720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:55:55.596 [2024-12-09 10:07:02.589733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:55:55.596 [2024-12-09 10:07:02.589749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:55:55.596 [2024-12-09 10:07:02.589762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:55:55.596 [2024-12-09 10:07:02.589782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:55:55.596 [2024-12-09 10:07:02.589826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:55:55.596 [2024-12-09 10:07:02.589844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:55:55.596 [2024-12-09 10:07:02.589858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:55:55.596 [2024-12-09 10:07:02.589874] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:55:55.596 [2024-12-09 10:07:02.589887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:55:55.596 [2024-12-09 10:07:02.589903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:55:55.596 [2024-12-09 10:07:02.589916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:55:55.596 [2024-12-09 10:07:02.589932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:55:55.596 [2024-12-09 10:07:02.589945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:55:55.596 [2024-12-09 10:07:02.589961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:55:55.596 [2024-12-09 10:07:02.589973] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:55:55.596 [2024-12-09 10:07:02.589990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:55:55.596 [2024-12-09 10:07:02.590004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:55:55.596 [2024-12-09 10:07:02.590023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:55:55.596 [2024-12-09 10:07:02.590038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:55:55.596 [2024-12-09 10:07:02.590056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:55:55.596 [2024-12-09 10:07:02.590075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:55:55.596 [2024-12-09 10:07:02.590092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:55:55.596 [2024-12-09 10:07:02.590105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:55:55.596 [2024-12-09 10:07:02.590121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:55:55.596 [2024-12-09 10:07:02.590137] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:55:55.596 [2024-12-09 10:07:02.590157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:55:55.596 [2024-12-09 10:07:02.590174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:55:55.596 [2024-12-09 10:07:02.590191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:55:55.596 [2024-12-09 10:07:02.590206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:55:55.596 [2024-12-09 10:07:02.590225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:55:55.596 [2024-12-09 10:07:02.590239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:55:55.596 [2024-12-09 10:07:02.590270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:55:55.596 [2024-12-09 10:07:02.590287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:55:55.596 [2024-12-09 10:07:02.590304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:55:55.596 [2024-12-09 10:07:02.590318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:55:55.596 [2024-12-09 10:07:02.590338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:55:55.596 [2024-12-09 10:07:02.590352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:55:55.596 [2024-12-09 10:07:02.590369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:55:55.596 [2024-12-09 10:07:02.590384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:55:55.596 [2024-12-09 10:07:02.590402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:55:55.596 [2024-12-09 10:07:02.590416] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:55:55.596 [2024-12-09 10:07:02.590434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:55:55.596 [2024-12-09 10:07:02.590453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:55:55.596 [2024-12-09 10:07:02.590470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:55:55.596 [2024-12-09 10:07:02.590485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:55:55.596 [2024-12-09 10:07:02.590503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:55:55.596 [2024-12-09 10:07:02.590518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:55.596 [2024-12-09 10:07:02.590535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:55:55.596 [2024-12-09 10:07:02.590550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.331 ms 00:55:55.596 [2024-12-09 10:07:02.590572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:55.596 [2024-12-09 10:07:02.590675] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
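The layout dump above is internally consistent and can be spot-checked with shell arithmetic; both figures below follow directly from values printed in the trace (20971520 L2P entries, a 4-byte L2P address size, 4096-byte blocks):

    echo $((20971520 * 4 / 1024 / 1024))     # 80  -> matches "Region l2p ... blocks: 80.00 MiB"
    echo $((20971520 * 4096 / 1024 / 1024))  # 81920 MiB (80 GiB) of user-visible space,
                                             # i.e. the 20971520 4-KiB blocks ftl0 exposes

The --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that 80 MiB mapping table stays resident in DRAM, consistent with the later "l2p maximum resident size is: 59 (of 60) MiB" notice.
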
00:55:55.596 [2024-12-09 10:07:02.590711] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:55:59.783 [2024-12-09 10:07:06.048381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.783 [2024-12-09 10:07:06.048473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:55:59.783 [2024-12-09 10:07:06.048500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3457.736 ms 00:55:59.783 [2024-12-09 10:07:06.048518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.783 [2024-12-09 10:07:06.090107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.783 [2024-12-09 10:07:06.090182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:55:59.783 [2024-12-09 10:07:06.090207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.263 ms 00:55:59.783 [2024-12-09 10:07:06.090226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.783 [2024-12-09 10:07:06.090448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.783 [2024-12-09 10:07:06.090478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:55:59.783 [2024-12-09 10:07:06.090496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:55:59.783 [2024-12-09 10:07:06.090516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.783 [2024-12-09 10:07:06.152882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.783 [2024-12-09 10:07:06.152962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:55:59.783 [2024-12-09 10:07:06.152987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.293 ms 00:55:59.783 [2024-12-09 10:07:06.153008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.783 [2024-12-09 10:07:06.153078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.783 [2024-12-09 10:07:06.153100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:55:59.783 [2024-12-09 10:07:06.153117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:55:59.783 [2024-12-09 10:07:06.153134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.783 [2024-12-09 10:07:06.153905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.783 [2024-12-09 10:07:06.153950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:55:59.783 [2024-12-09 10:07:06.153973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:55:59.783 [2024-12-09 10:07:06.153990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.783 [2024-12-09 10:07:06.154179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.783 [2024-12-09 10:07:06.154204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:55:59.783 [2024-12-09 10:07:06.154219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:55:59.783 [2024-12-09 10:07:06.154238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.783 [2024-12-09 10:07:06.177157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.783 [2024-12-09 10:07:06.177221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:55:59.783 [2024-12-09 
10:07:06.177243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.865 ms 00:55:59.783 [2024-12-09 10:07:06.177283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.783 [2024-12-09 10:07:06.192335] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:55:59.783 [2024-12-09 10:07:06.214845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.783 [2024-12-09 10:07:06.214924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:55:59.783 [2024-12-09 10:07:06.214957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.386 ms 00:55:59.783 [2024-12-09 10:07:06.214973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.783 [2024-12-09 10:07:06.287271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.783 [2024-12-09 10:07:06.287359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:55:59.783 [2024-12-09 10:07:06.287408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.208 ms 00:55:59.783 [2024-12-09 10:07:06.287422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.783 [2024-12-09 10:07:06.287727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.783 [2024-12-09 10:07:06.287758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:55:59.783 [2024-12-09 10:07:06.287780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:55:59.783 [2024-12-09 10:07:06.287795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.783 [2024-12-09 10:07:06.320765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.783 [2024-12-09 10:07:06.320818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:55:59.783 [2024-12-09 10:07:06.320856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.877 ms 00:55:59.783 [2024-12-09 10:07:06.320871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.784 [2024-12-09 10:07:06.352711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.784 [2024-12-09 10:07:06.352759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:55:59.784 [2024-12-09 10:07:06.352784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.771 ms 00:55:59.784 [2024-12-09 10:07:06.352799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.784 [2024-12-09 10:07:06.353694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.784 [2024-12-09 10:07:06.353731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:55:59.784 [2024-12-09 10:07:06.353753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:55:59.784 [2024-12-09 10:07:06.353769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.784 [2024-12-09 10:07:06.443737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.784 [2024-12-09 10:07:06.443834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:55:59.784 [2024-12-09 10:07:06.443870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.871 ms 00:55:59.784 [2024-12-09 10:07:06.443885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.784 [2024-12-09 
10:07:06.478685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.784 [2024-12-09 10:07:06.478936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:55:59.784 [2024-12-09 10:07:06.478977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.657 ms 00:55:59.784 [2024-12-09 10:07:06.478995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.784 [2024-12-09 10:07:06.511477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.784 [2024-12-09 10:07:06.511524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:55:59.784 [2024-12-09 10:07:06.511548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.401 ms 00:55:59.784 [2024-12-09 10:07:06.511563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.784 [2024-12-09 10:07:06.543776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.784 [2024-12-09 10:07:06.543854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:55:59.784 [2024-12-09 10:07:06.543913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.147 ms 00:55:59.784 [2024-12-09 10:07:06.543937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.784 [2024-12-09 10:07:06.544012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.784 [2024-12-09 10:07:06.544033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:55:59.784 [2024-12-09 10:07:06.544058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:55:59.784 [2024-12-09 10:07:06.544073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.784 [2024-12-09 10:07:06.544268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:59.784 [2024-12-09 10:07:06.544294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:55:59.784 [2024-12-09 10:07:06.544313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:55:59.784 [2024-12-09 10:07:06.544335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:59.784 [2024-12-09 10:07:06.545828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3976.755 ms, result 0 00:55:59.784 { 00:55:59.784 "name": "ftl0", 00:55:59.784 "uuid": "21838814-59a7-4409-b646-9b1c76dd8330" 00:55:59.784 } 00:55:59.784 10:07:06 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:55:59.784 10:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:55:59.784 10:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:55:59.784 10:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:55:59.784 10:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:55:59.784 10:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:55:59.784 10:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:56:00.042 10:07:06 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:56:00.300 [ 00:56:00.300 { 00:56:00.300 "name": "ftl0", 00:56:00.300 "aliases": [ 00:56:00.300 "21838814-59a7-4409-b646-9b1c76dd8330" 00:56:00.300 ], 00:56:00.300 "product_name": "FTL 
disk", 00:56:00.300 "block_size": 4096, 00:56:00.300 "num_blocks": 20971520, 00:56:00.300 "uuid": "21838814-59a7-4409-b646-9b1c76dd8330", 00:56:00.300 "assigned_rate_limits": { 00:56:00.300 "rw_ios_per_sec": 0, 00:56:00.300 "rw_mbytes_per_sec": 0, 00:56:00.300 "r_mbytes_per_sec": 0, 00:56:00.300 "w_mbytes_per_sec": 0 00:56:00.300 }, 00:56:00.300 "claimed": false, 00:56:00.300 "zoned": false, 00:56:00.300 "supported_io_types": { 00:56:00.300 "read": true, 00:56:00.300 "write": true, 00:56:00.300 "unmap": true, 00:56:00.300 "flush": true, 00:56:00.300 "reset": false, 00:56:00.300 "nvme_admin": false, 00:56:00.300 "nvme_io": false, 00:56:00.300 "nvme_io_md": false, 00:56:00.300 "write_zeroes": true, 00:56:00.300 "zcopy": false, 00:56:00.300 "get_zone_info": false, 00:56:00.301 "zone_management": false, 00:56:00.301 "zone_append": false, 00:56:00.301 "compare": false, 00:56:00.301 "compare_and_write": false, 00:56:00.301 "abort": false, 00:56:00.301 "seek_hole": false, 00:56:00.301 "seek_data": false, 00:56:00.301 "copy": false, 00:56:00.301 "nvme_iov_md": false 00:56:00.301 }, 00:56:00.301 "driver_specific": { 00:56:00.301 "ftl": { 00:56:00.301 "base_bdev": "3f86784b-f2fd-46a5-9529-b872b634786c", 00:56:00.301 "cache": "nvc0n1p0" 00:56:00.301 } 00:56:00.301 } 00:56:00.301 } 00:56:00.301 ] 00:56:00.301 10:07:07 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:56:00.301 10:07:07 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:56:00.301 10:07:07 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:56:00.559 10:07:07 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:56:00.559 10:07:07 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:56:00.818 [2024-12-09 10:07:07.783234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:00.818 [2024-12-09 10:07:07.783331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:56:00.818 [2024-12-09 10:07:07.783362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:56:00.818 [2024-12-09 10:07:07.783399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:00.818 [2024-12-09 10:07:07.783471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:56:00.818 [2024-12-09 10:07:07.787280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:00.818 [2024-12-09 10:07:07.787322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:56:00.818 [2024-12-09 10:07:07.787344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.771 ms 00:56:00.818 [2024-12-09 10:07:07.787360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:00.818 [2024-12-09 10:07:07.787874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:00.818 [2024-12-09 10:07:07.787912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:56:00.818 [2024-12-09 10:07:07.787934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:56:00.818 [2024-12-09 10:07:07.787948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:00.818 [2024-12-09 10:07:07.791280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:00.818 [2024-12-09 10:07:07.791315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:56:00.818 
[2024-12-09 10:07:07.791338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:56:00.818 [2024-12-09 10:07:07.791352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:00.818 [2024-12-09 10:07:07.798033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:00.818 [2024-12-09 10:07:07.798083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:56:00.818 [2024-12-09 10:07:07.798118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.635 ms 00:56:00.818 [2024-12-09 10:07:07.798144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:00.818 [2024-12-09 10:07:07.830801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:00.818 [2024-12-09 10:07:07.830853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:56:00.818 [2024-12-09 10:07:07.830898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.495 ms 00:56:00.818 [2024-12-09 10:07:07.830913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:00.818 [2024-12-09 10:07:07.850191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:00.818 [2024-12-09 10:07:07.850245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:56:00.818 [2024-12-09 10:07:07.850287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.211 ms 00:56:00.818 [2024-12-09 10:07:07.850302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:00.818 [2024-12-09 10:07:07.850619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:00.818 [2024-12-09 10:07:07.850643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:56:00.818 [2024-12-09 10:07:07.850662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:56:00.818 [2024-12-09 10:07:07.850677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.078 [2024-12-09 10:07:07.882061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:01.078 [2024-12-09 10:07:07.882126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:56:01.078 [2024-12-09 10:07:07.882166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.341 ms 00:56:01.078 [2024-12-09 10:07:07.882182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.078 [2024-12-09 10:07:07.913022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:01.078 [2024-12-09 10:07:07.913069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:56:01.078 [2024-12-09 10:07:07.913109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.774 ms 00:56:01.078 [2024-12-09 10:07:07.913123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.078 [2024-12-09 10:07:07.943669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:01.078 [2024-12-09 10:07:07.943716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:56:01.078 [2024-12-09 10:07:07.943739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.476 ms 00:56:01.078 [2024-12-09 10:07:07.943754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.078 [2024-12-09 10:07:07.974044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:01.078 [2024-12-09 10:07:07.974101] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:56:01.078 [2024-12-09 10:07:07.974125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.129 ms 00:56:01.078 [2024-12-09 10:07:07.974140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.078 [2024-12-09 10:07:07.974208] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:56:01.078 [2024-12-09 10:07:07.974237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 
[2024-12-09 10:07:07.974622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:56:01.078 [2024-12-09 10:07:07.974847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.974862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.974879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.974895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.974915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.974930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.974947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.974962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.974980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.974995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:56:01.079 [2024-12-09 10:07:07.975046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.975946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.976105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.976247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.976461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.976533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.976751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.976820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.976964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.976994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:56:01.079 [2024-12-09 10:07:07.977538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:56:01.079 [2024-12-09 10:07:07.977557] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21838814-59a7-4409-b646-9b1c76dd8330 00:56:01.079 [2024-12-09 10:07:07.977572] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:56:01.079 [2024-12-09 10:07:07.977591] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:56:01.079 [2024-12-09 10:07:07.977608] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:56:01.079 [2024-12-09 10:07:07.977626] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:56:01.079 [2024-12-09 10:07:07.977639] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:56:01.079 [2024-12-09 10:07:07.977656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:56:01.079 [2024-12-09 10:07:07.977670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:56:01.079 [2024-12-09 10:07:07.977686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:56:01.079 [2024-12-09 10:07:07.977699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:56:01.079 [2024-12-09 10:07:07.977716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:01.079 [2024-12-09 10:07:07.977731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:56:01.079 [2024-12-09 10:07:07.977750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.513 ms 00:56:01.079 [2024-12-09 10:07:07.977764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.079 [2024-12-09 10:07:07.995099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:01.079 [2024-12-09 10:07:07.995149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:56:01.079 [2024-12-09 10:07:07.995173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.204 ms 00:56:01.079 [2024-12-09 10:07:07.995188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.079 [2024-12-09 10:07:07.995716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:01.079 [2024-12-09 10:07:07.995871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:56:01.079 [2024-12-09 10:07:07.995907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:56:01.079 [2024-12-09 10:07:07.995923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.080 [2024-12-09 10:07:08.056742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:01.080 [2024-12-09 10:07:08.056815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:56:01.080 [2024-12-09 10:07:08.056841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:01.080 [2024-12-09 10:07:08.056857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
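[editor's note] The bdev_ftl_unload RPC issued at fio.sh@73 above is what drives this whole 'FTL shutdown' management sequence (persist L2P/NV-cache/band metadata, set clean state, dump statistics, then the rollback records that continue below). A minimal sketch of replaying the same create/inspect/unload cycle by hand over the RPC socket, assuming a running SPDK target with the base and cache bdevs reported in the JSON dump earlier; only bdev_get_bdevs, bdev_wait_for_examine and bdev_ftl_unload appear verbatim in this log -- the bdev_ftl_create flags follow stock rpc.py and are an assumption here:

    #!/usr/bin/env bash
    # Hedged sketch: replay the FTL create/inspect/unload cycle traced in this log.
    # Assumes an SPDK app listening on the default /var/tmp/spdk.sock.
    set -euo pipefail
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Create the FTL bdev on top of the base bdev, with the NV cache partition
    # (names copied from the driver_specific JSON above; create flags assumed).
    $RPC bdev_ftl_create -b ftl0 -d 3f86784b-f2fd-46a5-9529-b872b634786c -c nvc0n1p0

    # Same readiness check the harness performs: settle examine, then poll the bdev.
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs -b ftl0 -t 2000

    # Tearing down triggers the 'FTL shutdown' management process seen here.
    $RPC bdev_ftl_unload -b ftl0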
00:56:01.080 [2024-12-09 10:07:08.056954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:01.080 [2024-12-09 10:07:08.056972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:56:01.080 [2024-12-09 10:07:08.056990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:01.080 [2024-12-09 10:07:08.057004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.080 [2024-12-09 10:07:08.057161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:01.080 [2024-12-09 10:07:08.057184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:56:01.080 [2024-12-09 10:07:08.057203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:01.080 [2024-12-09 10:07:08.057217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.080 [2024-12-09 10:07:08.057288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:01.080 [2024-12-09 10:07:08.057308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:56:01.080 [2024-12-09 10:07:08.057326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:01.080 [2024-12-09 10:07:08.057340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.338 [2024-12-09 10:07:08.173605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:01.338 [2024-12-09 10:07:08.173885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:56:01.338 [2024-12-09 10:07:08.173925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:01.338 [2024-12-09 10:07:08.173942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.338 [2024-12-09 10:07:08.263037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:01.338 [2024-12-09 10:07:08.263110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:56:01.338 [2024-12-09 10:07:08.263138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:01.338 [2024-12-09 10:07:08.263152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.338 [2024-12-09 10:07:08.263331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:01.339 [2024-12-09 10:07:08.263353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:56:01.339 [2024-12-09 10:07:08.263376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:01.339 [2024-12-09 10:07:08.263390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.339 [2024-12-09 10:07:08.263502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:01.339 [2024-12-09 10:07:08.263522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:56:01.339 [2024-12-09 10:07:08.263544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:01.339 [2024-12-09 10:07:08.263559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.339 [2024-12-09 10:07:08.263719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:01.339 [2024-12-09 10:07:08.263746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:56:01.339 [2024-12-09 10:07:08.263778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:01.339 [2024-12-09 
10:07:08.263793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.339 [2024-12-09 10:07:08.263890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:01.339 [2024-12-09 10:07:08.263911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:56:01.339 [2024-12-09 10:07:08.263930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:01.339 [2024-12-09 10:07:08.263945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.339 [2024-12-09 10:07:08.264010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:01.339 [2024-12-09 10:07:08.264028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:56:01.339 [2024-12-09 10:07:08.264045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:01.339 [2024-12-09 10:07:08.264062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.339 [2024-12-09 10:07:08.264138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:01.339 [2024-12-09 10:07:08.264157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:56:01.339 [2024-12-09 10:07:08.264173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:01.339 [2024-12-09 10:07:08.264187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:01.339 [2024-12-09 10:07:08.264416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 481.148 ms, result 0 00:56:01.339 true 00:56:01.339 10:07:08 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77273 00:56:01.339 10:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77273 ']' 00:56:01.339 10:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77273 00:56:01.339 10:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:56:01.339 10:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:01.339 10:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77273 00:56:01.339 killing process with pid 77273 00:56:01.339 10:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:56:01.339 10:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:56:01.339 10:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77273' 00:56:01.339 10:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77273 00:56:01.339 10:07:08 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77273 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:56:06.607 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:56:06.608 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:56:06.608 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:56:06.608 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:56:06.608 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:56:06.608 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:56:06.608 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:56:06.608 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:56:06.608 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:56:06.608 10:07:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:56:06.608 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:56:06.608 fio-3.35 00:56:06.608 Starting 1 thread 00:56:11.886 00:56:11.886 test: (groupid=0, jobs=1): err= 0: pid=77497: Mon Dec 9 10:07:18 2024 00:56:11.886 read: IOPS=972, BW=64.6MiB/s (67.7MB/s)(255MiB/3940msec) 00:56:11.886 slat (nsec): min=5715, max=47282, avg=7724.94, stdev=3481.50 00:56:11.886 clat (usec): min=320, max=812, avg=454.82, stdev=54.03 00:56:11.886 lat (usec): min=328, max=818, avg=462.55, stdev=54.67 00:56:11.886 clat percentiles (usec): 00:56:11.886 | 1.00th=[ 359], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 404], 00:56:11.886 | 30.00th=[ 441], 40.00th=[ 445], 50.00th=[ 449], 60.00th=[ 457], 00:56:11.886 | 70.00th=[ 465], 80.00th=[ 510], 90.00th=[ 529], 95.00th=[ 545], 00:56:11.886 | 99.00th=[ 603], 99.50th=[ 611], 99.90th=[ 644], 99.95th=[ 693], 00:56:11.886 | 99.99th=[ 816] 00:56:11.886 write: IOPS=979, BW=65.0MiB/s (68.2MB/s)(256MiB/3937msec); 0 zone resets 00:56:11.886 slat (nsec): min=19715, max=97381, avg=25153.58, stdev=5890.70 00:56:11.886 clat (usec): min=367, max=950, avg=522.64, stdev=63.78 00:56:11.886 lat (usec): min=400, max=977, avg=547.80, stdev=64.04 00:56:11.886 clat percentiles (usec): 00:56:11.886 | 1.00th=[ 404], 5.00th=[ 445], 10.00th=[ 465], 20.00th=[ 474], 00:56:11.886 | 30.00th=[ 482], 40.00th=[ 494], 50.00th=[ 529], 60.00th=[ 537], 00:56:11.886 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 603], 95.00th=[ 627], 00:56:11.886 | 99.00th=[ 766], 99.50th=[ 824], 99.90th=[ 898], 99.95th=[ 938], 00:56:11.886 | 99.99th=[ 955] 00:56:11.886 bw ( KiB/s): min=63512, max=69224, per=99.94%, avg=66562.29, stdev=2081.67, samples=7 00:56:11.886 iops : min= 934, max= 1018, avg=978.86, stdev=30.61, samples=7 00:56:11.886 lat (usec) : 500=60.63%, 750=38.70%, 1000=0.66% 00:56:11.886 cpu : 
usr=99.16%, sys=0.03%, ctx=26, majf=0, minf=1169 00:56:11.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:11.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:11.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:11.886 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:11.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:11.886 00:56:11.886 Run status group 0 (all jobs): 00:56:11.886 READ: bw=64.6MiB/s (67.7MB/s), 64.6MiB/s-64.6MiB/s (67.7MB/s-67.7MB/s), io=255MiB (267MB), run=3940-3940msec 00:56:11.886 WRITE: bw=65.0MiB/s (68.2MB/s), 65.0MiB/s-65.0MiB/s (68.2MB/s-68.2MB/s), io=256MiB (269MB), run=3937-3937msec 00:56:13.788 ----------------------------------------------------- 00:56:13.788 Suppressions used: 00:56:13.788 count bytes template 00:56:13.788 1 5 /usr/src/fio/parse.c 00:56:13.788 1 8 libtcmalloc_minimal.so 00:56:13.788 1 904 libcrypto.so 00:56:13.788 ----------------------------------------------------- 00:56:13.788 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:56:13.788 10:07:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:56:14.048 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:56:14.048 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:56:14.048 fio-3.35 00:56:14.048 Starting 2 threads 00:56:46.114 00:56:46.114 first_half: (groupid=0, jobs=1): err= 0: pid=77602: Mon Dec 9 10:07:50 2024 00:56:46.114 read: IOPS=2312, BW=9249KiB/s (9471kB/s)(256MiB/28314msec) 00:56:46.114 slat (nsec): min=4670, max=37564, avg=7593.78, stdev=1976.79 00:56:46.114 clat (usec): min=779, max=310670, avg=46511.76, stdev=28854.06 00:56:46.114 lat (usec): min=786, max=310678, avg=46519.35, stdev=28854.31 00:56:46.114 clat percentiles (msec): 00:56:46.114 | 1.00th=[ 12], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 39], 00:56:46.114 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 40], 00:56:46.114 | 70.00th=[ 42], 80.00th=[ 46], 90.00th=[ 52], 95.00th=[ 89], 00:56:46.114 | 99.00th=[ 201], 99.50th=[ 211], 99.90th=[ 241], 99.95th=[ 271], 00:56:46.114 | 99.99th=[ 300] 00:56:46.114 write: IOPS=2318, BW=9274KiB/s (9497kB/s)(256MiB/28266msec); 0 zone resets 00:56:46.114 slat (usec): min=5, max=334, avg= 8.79, stdev= 4.70 00:56:46.114 clat (usec): min=463, max=57560, avg=8803.56, stdev=8955.00 00:56:46.114 lat (usec): min=479, max=57568, avg=8812.35, stdev=8955.12 00:56:46.114 clat percentiles (usec): 00:56:46.114 | 1.00th=[ 1106], 5.00th=[ 1500], 10.00th=[ 1778], 20.00th=[ 3294], 00:56:46.114 | 30.00th=[ 4490], 40.00th=[ 5735], 50.00th=[ 6652], 60.00th=[ 7570], 00:56:46.114 | 70.00th=[ 8586], 80.00th=[10290], 90.00th=[15795], 95.00th=[34341], 00:56:46.114 | 99.00th=[43779], 99.50th=[48497], 99.90th=[53740], 99.95th=[55837], 00:56:46.114 | 99.99th=[56886] 00:56:46.114 bw ( KiB/s): min= 1064, max=50392, per=100.00%, avg=20034.12, stdev=15250.34, samples=26 00:56:46.114 iops : min= 266, max=12598, avg=5008.50, stdev=3812.57, samples=26 00:56:46.114 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.21% 00:56:46.114 lat (msec) : 2=6.42%, 4=6.17%, 10=26.89%, 20=8.17%, 50=46.50% 00:56:46.114 lat (msec) : 100=3.32%, 250=2.24%, 500=0.04% 00:56:46.114 cpu : usr=99.13%, sys=0.19%, ctx=38, majf=0, minf=5554 00:56:46.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:56:46.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:46.114 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:56:46.114 issued rwts: total=65468,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:46.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:56:46.114 second_half: (groupid=0, jobs=1): err= 0: pid=77603: Mon Dec 9 10:07:50 2024 00:56:46.114 read: IOPS=2335, BW=9342KiB/s (9566kB/s)(256MiB/28042msec) 00:56:46.114 slat (nsec): min=4659, max=79037, avg=7632.72, stdev=2033.66 00:56:46.114 clat (msec): min=11, max=270, avg=47.12, stdev=26.45 00:56:46.114 lat (msec): min=11, max=270, avg=47.13, stdev=26.45 00:56:46.114 clat percentiles (msec): 00:56:46.114 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:56:46.114 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 40], 00:56:46.114 | 70.00th=[ 43], 80.00th=[ 46], 90.00th=[ 54], 95.00th=[ 86], 00:56:46.114 | 99.00th=[ 190], 99.50th=[ 213], 99.90th=[ 247], 99.95th=[ 
251], 00:56:46.114 | 99.99th=[ 266] 00:56:46.114 write: IOPS=2349, BW=9399KiB/s (9624kB/s)(256MiB/27892msec); 0 zone resets 00:56:46.114 slat (usec): min=5, max=456, avg= 8.89, stdev= 5.98 00:56:46.114 clat (usec): min=441, max=53086, avg=7654.24, stdev=5637.34 00:56:46.114 lat (usec): min=455, max=53096, avg=7663.13, stdev=5637.77 00:56:46.114 clat percentiles (usec): 00:56:46.114 | 1.00th=[ 1287], 5.00th=[ 2008], 10.00th=[ 2900], 20.00th=[ 4146], 00:56:46.114 | 30.00th=[ 5145], 40.00th=[ 5932], 50.00th=[ 6587], 60.00th=[ 7308], 00:56:46.114 | 70.00th=[ 7832], 80.00th=[ 9241], 90.00th=[13829], 95.00th=[15926], 00:56:46.114 | 99.00th=[36963], 99.50th=[41681], 99.90th=[49546], 99.95th=[50070], 00:56:46.114 | 99.99th=[52691] 00:56:46.114 bw ( KiB/s): min= 840, max=46800, per=100.00%, avg=21845.33, stdev=15267.92, samples=24 00:56:46.114 iops : min= 210, max=11700, avg=5461.33, stdev=3816.98, samples=24 00:56:46.114 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.15% 00:56:46.114 lat (msec) : 2=2.27%, 4=6.54%, 10=32.26%, 20=7.53%, 50=45.20% 00:56:46.114 lat (msec) : 100=3.88%, 250=2.09%, 500=0.03% 00:56:46.114 cpu : usr=99.05%, sys=0.21%, ctx=186, majf=0, minf=5559 00:56:46.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:56:46.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:46.114 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:56:46.114 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:46.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:56:46.114 00:56:46.114 Run status group 0 (all jobs): 00:56:46.114 READ: bw=18.1MiB/s (18.9MB/s), 9249KiB/s-9342KiB/s (9471kB/s-9566kB/s), io=512MiB (536MB), run=28042-28314msec 00:56:46.114 WRITE: bw=18.1MiB/s (19.0MB/s), 9274KiB/s-9399KiB/s (9497kB/s-9624kB/s), io=512MiB (537MB), run=27892-28266msec 00:56:46.114 ----------------------------------------------------- 00:56:46.114 Suppressions used: 00:56:46.114 count bytes template 00:56:46.114 2 10 /usr/src/fio/parse.c 00:56:46.114 3 288 /usr/src/fio/iolog.c 00:56:46.114 1 8 libtcmalloc_minimal.so 00:56:46.114 1 904 libcrypto.so 00:56:46.114 ----------------------------------------------------- 00:56:46.114 00:56:46.114 10:07:53 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:56:46.114 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:46.114 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local 
sanitizers 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:56:46.373 10:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:56:46.632 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:56:46.632 fio-3.35 00:56:46.632 Starting 1 thread 00:57:04.714 00:57:04.714 test: (groupid=0, jobs=1): err= 0: pid=77961: Mon Dec 9 10:08:10 2024 00:57:04.714 read: IOPS=6516, BW=25.5MiB/s (26.7MB/s)(255MiB/10006msec) 00:57:04.714 slat (nsec): min=4900, max=33830, avg=6951.15, stdev=1738.83 00:57:04.714 clat (usec): min=849, max=38275, avg=19631.57, stdev=1337.38 00:57:04.714 lat (usec): min=854, max=38280, avg=19638.52, stdev=1337.44 00:57:04.714 clat percentiles (usec): 00:57:04.714 | 1.00th=[18482], 5.00th=[18744], 10.00th=[19006], 20.00th=[19006], 00:57:04.714 | 30.00th=[19268], 40.00th=[19268], 50.00th=[19530], 60.00th=[19530], 00:57:04.714 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20055], 95.00th=[21103], 00:57:04.714 | 99.00th=[26608], 99.50th=[26608], 99.90th=[28705], 99.95th=[33817], 00:57:04.714 | 99.99th=[37487] 00:57:04.714 write: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(256MiB/5704msec); 0 zone resets 00:57:04.714 slat (usec): min=5, max=509, avg= 9.66, stdev= 5.37 00:57:04.714 clat (usec): min=637, max=72977, avg=11082.68, stdev=13935.37 00:57:04.714 lat (usec): min=645, max=72986, avg=11092.34, stdev=13935.43 00:57:04.714 clat percentiles (usec): 00:57:04.714 | 1.00th=[ 938], 5.00th=[ 1139], 10.00th=[ 1270], 20.00th=[ 1467], 00:57:04.714 | 30.00th=[ 1680], 40.00th=[ 2180], 50.00th=[ 7242], 60.00th=[ 8356], 00:57:04.714 | 70.00th=[ 9896], 80.00th=[11863], 90.00th=[39584], 95.00th=[44303], 00:57:04.714 | 99.00th=[49546], 99.50th=[51119], 99.90th=[55837], 99.95th=[60556], 00:57:04.714 | 99.99th=[70779] 00:57:04.714 bw ( KiB/s): min=13704, max=67224, per=95.07%, avg=43690.67, stdev=13262.54, samples=12 00:57:04.714 iops : min= 3426, max=16806, avg=10922.67, stdev=3315.64, samples=12 00:57:04.714 lat (usec) : 750=0.02%, 1000=0.84% 00:57:04.714 lat (msec) : 2=18.32%, 4=1.82%, 10=14.52%, 20=51.39%, 50=12.71% 00:57:04.714 lat (msec) : 100=0.39% 00:57:04.714 cpu : usr=98.82%, sys=0.38%, ctx=23, majf=0, minf=5565 00:57:04.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.1%, >=64=99.8% 00:57:04.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:04.714 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:57:04.714 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:04.714 latency : target=0, window=0, percentile=100.00%, depth=128 00:57:04.714 00:57:04.714 Run status group 0 (all jobs): 00:57:04.714 READ: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=255MiB (267MB), run=10006-10006msec 00:57:04.714 WRITE: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=256MiB (268MB), run=5704-5704msec 00:57:05.649 ----------------------------------------------------- 00:57:05.649 Suppressions used: 00:57:05.649 count bytes template 00:57:05.649 1 5 /usr/src/fio/parse.c 00:57:05.649 2 192 /usr/src/fio/iolog.c 00:57:05.649 1 8 libtcmalloc_minimal.so 00:57:05.649 1 904 libcrypto.so 00:57:05.649 ----------------------------------------------------- 00:57:05.649 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:57:05.649 Remove shared memory files 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58196 /dev/shm/spdk_tgt_trace.pid76183 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:57:05.649 ************************************ 00:57:05.649 END TEST ftl_fio_basic 00:57:05.649 ************************************ 00:57:05.649 00:57:05.649 real 1m15.269s 00:57:05.649 user 2m45.836s 00:57:05.649 sys 0m4.335s 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:05.649 10:08:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:57:05.649 10:08:12 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:57:05.649 10:08:12 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:57:05.649 10:08:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:05.649 10:08:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:57:05.649 ************************************ 00:57:05.649 START TEST ftl_bdevperf 00:57:05.649 ************************************ 00:57:05.649 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:57:05.649 * Looking for test storage... 
00:57:05.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:57:05.649 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:05.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:05.911 --rc genhtml_branch_coverage=1 00:57:05.911 --rc genhtml_function_coverage=1 00:57:05.911 --rc genhtml_legend=1 00:57:05.911 --rc geninfo_all_blocks=1 00:57:05.911 --rc geninfo_unexecuted_blocks=1 00:57:05.911 00:57:05.911 ' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:05.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:05.911 --rc genhtml_branch_coverage=1 00:57:05.911 
--rc genhtml_function_coverage=1 00:57:05.911 --rc genhtml_legend=1 00:57:05.911 --rc geninfo_all_blocks=1 00:57:05.911 --rc geninfo_unexecuted_blocks=1 00:57:05.911 00:57:05.911 ' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:05.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:05.911 --rc genhtml_branch_coverage=1 00:57:05.911 --rc genhtml_function_coverage=1 00:57:05.911 --rc genhtml_legend=1 00:57:05.911 --rc geninfo_all_blocks=1 00:57:05.911 --rc geninfo_unexecuted_blocks=1 00:57:05.911 00:57:05.911 ' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:05.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:05.911 --rc genhtml_branch_coverage=1 00:57:05.911 --rc genhtml_function_coverage=1 00:57:05.911 --rc genhtml_legend=1 00:57:05.911 --rc geninfo_all_blocks=1 00:57:05.911 --rc geninfo_unexecuted_blocks=1 00:57:05.911 00:57:05.911 ' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:57:05.911 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78217 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78217 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78217 ']' 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:05.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:05.912 10:08:12 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:57:05.912 [2024-12-09 10:08:12.933974] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
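The trace above starts bdevperf with -z (come up idle and listen for RPCs) and -T ftl0, then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the repo paths shown in this log (the polling loop is an illustrative stand-in for autotest_common.sh's waitforlisten, not its verbatim implementation):
# Start bdevperf idle (-z) against the FTL bdev under test (-T ftl0).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!
# Mirror the trap installed at bdevperf.sh@20 so the process is reaped on exit.
trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
# Poll the default RPC socket (/var/tmp/spdk.sock) until it accepts a request;
# rpc_get_methods is used here only as a cheap liveness probe.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
    sleep 0.1
done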
00:57:05.912 [2024-12-09 10:08:12.934411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78217 ] 00:57:06.172 [2024-12-09 10:08:13.126705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:06.429 [2024-12-09 10:08:13.284125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:06.994 10:08:13 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:06.994 10:08:13 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:57:06.994 10:08:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:57:06.994 10:08:13 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:57:06.994 10:08:13 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:57:06.994 10:08:13 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:57:06.994 10:08:13 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:57:06.994 10:08:13 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:57:07.251 10:08:14 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:57:07.251 10:08:14 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:57:07.251 10:08:14 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:57:07.251 10:08:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:57:07.251 10:08:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:57:07.251 10:08:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:57:07.251 10:08:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:57:07.251 10:08:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:57:07.816 10:08:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:57:07.816 { 00:57:07.816 "name": "nvme0n1", 00:57:07.816 "aliases": [ 00:57:07.816 "3dd787f6-6fca-4602-af0d-076ef78a3755" 00:57:07.816 ], 00:57:07.816 "product_name": "NVMe disk", 00:57:07.816 "block_size": 4096, 00:57:07.816 "num_blocks": 1310720, 00:57:07.816 "uuid": "3dd787f6-6fca-4602-af0d-076ef78a3755", 00:57:07.816 "numa_id": -1, 00:57:07.816 "assigned_rate_limits": { 00:57:07.816 "rw_ios_per_sec": 0, 00:57:07.816 "rw_mbytes_per_sec": 0, 00:57:07.816 "r_mbytes_per_sec": 0, 00:57:07.816 "w_mbytes_per_sec": 0 00:57:07.816 }, 00:57:07.816 "claimed": true, 00:57:07.816 "claim_type": "read_many_write_one", 00:57:07.816 "zoned": false, 00:57:07.816 "supported_io_types": { 00:57:07.816 "read": true, 00:57:07.816 "write": true, 00:57:07.816 "unmap": true, 00:57:07.816 "flush": true, 00:57:07.816 "reset": true, 00:57:07.816 "nvme_admin": true, 00:57:07.816 "nvme_io": true, 00:57:07.816 "nvme_io_md": false, 00:57:07.816 "write_zeroes": true, 00:57:07.816 "zcopy": false, 00:57:07.816 "get_zone_info": false, 00:57:07.816 "zone_management": false, 00:57:07.816 "zone_append": false, 00:57:07.816 "compare": true, 00:57:07.816 "compare_and_write": false, 00:57:07.816 "abort": true, 00:57:07.816 "seek_hole": false, 00:57:07.816 "seek_data": false, 00:57:07.816 "copy": true, 00:57:07.816 "nvme_iov_md": false 00:57:07.816 }, 00:57:07.816 "driver_specific": { 00:57:07.816 
"nvme": [ 00:57:07.816 { 00:57:07.816 "pci_address": "0000:00:11.0", 00:57:07.816 "trid": { 00:57:07.816 "trtype": "PCIe", 00:57:07.816 "traddr": "0000:00:11.0" 00:57:07.816 }, 00:57:07.816 "ctrlr_data": { 00:57:07.816 "cntlid": 0, 00:57:07.816 "vendor_id": "0x1b36", 00:57:07.816 "model_number": "QEMU NVMe Ctrl", 00:57:07.816 "serial_number": "12341", 00:57:07.816 "firmware_revision": "8.0.0", 00:57:07.816 "subnqn": "nqn.2019-08.org.qemu:12341", 00:57:07.816 "oacs": { 00:57:07.816 "security": 0, 00:57:07.816 "format": 1, 00:57:07.816 "firmware": 0, 00:57:07.816 "ns_manage": 1 00:57:07.816 }, 00:57:07.816 "multi_ctrlr": false, 00:57:07.816 "ana_reporting": false 00:57:07.816 }, 00:57:07.816 "vs": { 00:57:07.816 "nvme_version": "1.4" 00:57:07.816 }, 00:57:07.816 "ns_data": { 00:57:07.816 "id": 1, 00:57:07.816 "can_share": false 00:57:07.816 } 00:57:07.816 } 00:57:07.816 ], 00:57:07.816 "mp_policy": "active_passive" 00:57:07.816 } 00:57:07.816 } 00:57:07.816 ]' 00:57:07.816 10:08:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:57:07.816 10:08:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:57:07.816 10:08:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:57:07.816 10:08:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:57:07.816 10:08:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:57:07.816 10:08:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:57:07.816 10:08:14 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:57:07.816 10:08:14 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:57:07.816 10:08:14 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:57:07.816 10:08:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:57:07.816 10:08:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:57:08.088 10:08:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ddbf3344-4afc-472c-a9e3-23651aa1de0e 00:57:08.088 10:08:14 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:57:08.088 10:08:14 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ddbf3344-4afc-472c-a9e3-23651aa1de0e 00:57:08.345 10:08:15 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:57:08.603 10:08:15 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c12fc8e8-ab9b-4f24-a352-0a61f2282c72 00:57:08.603 10:08:15 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c12fc8e8-ab9b-4f24-a352-0a61f2282c72 00:57:08.861 10:08:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 00:57:08.861 10:08:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 00:57:08.861 10:08:15 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:57:08.861 10:08:15 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:57:08.861 10:08:15 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 00:57:08.861 10:08:15 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:57:08.861 10:08:15 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 00:57:08.861 10:08:15 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 00:57:08.861 10:08:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:57:08.861 10:08:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:57:08.861 10:08:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:57:08.861 10:08:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 00:57:09.119 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:57:09.119 { 00:57:09.119 "name": "6d2dc4d5-7db0-40e9-9375-ce8fe10699d0", 00:57:09.119 "aliases": [ 00:57:09.119 "lvs/nvme0n1p0" 00:57:09.119 ], 00:57:09.119 "product_name": "Logical Volume", 00:57:09.119 "block_size": 4096, 00:57:09.119 "num_blocks": 26476544, 00:57:09.119 "uuid": "6d2dc4d5-7db0-40e9-9375-ce8fe10699d0", 00:57:09.119 "assigned_rate_limits": { 00:57:09.119 "rw_ios_per_sec": 0, 00:57:09.119 "rw_mbytes_per_sec": 0, 00:57:09.119 "r_mbytes_per_sec": 0, 00:57:09.119 "w_mbytes_per_sec": 0 00:57:09.119 }, 00:57:09.119 "claimed": false, 00:57:09.119 "zoned": false, 00:57:09.119 "supported_io_types": { 00:57:09.119 "read": true, 00:57:09.119 "write": true, 00:57:09.119 "unmap": true, 00:57:09.119 "flush": false, 00:57:09.119 "reset": true, 00:57:09.119 "nvme_admin": false, 00:57:09.119 "nvme_io": false, 00:57:09.119 "nvme_io_md": false, 00:57:09.119 "write_zeroes": true, 00:57:09.119 "zcopy": false, 00:57:09.119 "get_zone_info": false, 00:57:09.119 "zone_management": false, 00:57:09.119 "zone_append": false, 00:57:09.119 "compare": false, 00:57:09.119 "compare_and_write": false, 00:57:09.119 "abort": false, 00:57:09.119 "seek_hole": true, 00:57:09.119 "seek_data": true, 00:57:09.119 "copy": false, 00:57:09.119 "nvme_iov_md": false 00:57:09.119 }, 00:57:09.119 "driver_specific": { 00:57:09.119 "lvol": { 00:57:09.119 "lvol_store_uuid": "c12fc8e8-ab9b-4f24-a352-0a61f2282c72", 00:57:09.119 "base_bdev": "nvme0n1", 00:57:09.119 "thin_provision": true, 00:57:09.119 "num_allocated_clusters": 0, 00:57:09.119 "snapshot": false, 00:57:09.119 "clone": false, 00:57:09.119 "esnap_clone": false 00:57:09.119 } 00:57:09.119 } 00:57:09.119 } 00:57:09.119 ]' 00:57:09.119 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:57:09.378 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:57:09.378 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:57:09.378 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:57:09.378 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:57:09.378 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:57:09.378 10:08:16 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:57:09.378 10:08:16 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:57:09.378 10:08:16 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:57:09.636 10:08:16 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:57:09.636 10:08:16 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:57:09.636 10:08:16 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 00:57:09.636 10:08:16 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 00:57:09.636 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:57:09.636 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:57:09.636 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:57:09.636 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 00:57:09.895 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:57:09.895 { 00:57:09.895 "name": "6d2dc4d5-7db0-40e9-9375-ce8fe10699d0", 00:57:09.895 "aliases": [ 00:57:09.895 "lvs/nvme0n1p0" 00:57:09.895 ], 00:57:09.895 "product_name": "Logical Volume", 00:57:09.895 "block_size": 4096, 00:57:09.895 "num_blocks": 26476544, 00:57:09.895 "uuid": "6d2dc4d5-7db0-40e9-9375-ce8fe10699d0", 00:57:09.895 "assigned_rate_limits": { 00:57:09.895 "rw_ios_per_sec": 0, 00:57:09.895 "rw_mbytes_per_sec": 0, 00:57:09.895 "r_mbytes_per_sec": 0, 00:57:09.895 "w_mbytes_per_sec": 0 00:57:09.895 }, 00:57:09.895 "claimed": false, 00:57:09.895 "zoned": false, 00:57:09.895 "supported_io_types": { 00:57:09.895 "read": true, 00:57:09.895 "write": true, 00:57:09.895 "unmap": true, 00:57:09.895 "flush": false, 00:57:09.895 "reset": true, 00:57:09.895 "nvme_admin": false, 00:57:09.895 "nvme_io": false, 00:57:09.895 "nvme_io_md": false, 00:57:09.895 "write_zeroes": true, 00:57:09.895 "zcopy": false, 00:57:09.895 "get_zone_info": false, 00:57:09.895 "zone_management": false, 00:57:09.895 "zone_append": false, 00:57:09.895 "compare": false, 00:57:09.895 "compare_and_write": false, 00:57:09.895 "abort": false, 00:57:09.895 "seek_hole": true, 00:57:09.895 "seek_data": true, 00:57:09.895 "copy": false, 00:57:09.895 "nvme_iov_md": false 00:57:09.895 }, 00:57:09.895 "driver_specific": { 00:57:09.895 "lvol": { 00:57:09.895 "lvol_store_uuid": "c12fc8e8-ab9b-4f24-a352-0a61f2282c72", 00:57:09.895 "base_bdev": "nvme0n1", 00:57:09.895 "thin_provision": true, 00:57:09.895 "num_allocated_clusters": 0, 00:57:09.895 "snapshot": false, 00:57:09.895 "clone": false, 00:57:09.895 "esnap_clone": false 00:57:09.895 } 00:57:09.895 } 00:57:09.895 } 00:57:09.895 ]' 00:57:09.895 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:57:09.895 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:57:09.895 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:57:09.895 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:57:09.895 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:57:09.895 10:08:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:57:09.895 10:08:16 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:57:09.895 10:08:16 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:57:10.153 10:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:57:10.153 10:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 00:57:10.153 10:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 00:57:10.153 10:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:57:10.153 10:08:17 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:57:10.153 10:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:57:10.153 10:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 00:57:10.412 10:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:57:10.412 { 00:57:10.412 "name": "6d2dc4d5-7db0-40e9-9375-ce8fe10699d0", 00:57:10.412 "aliases": [ 00:57:10.412 "lvs/nvme0n1p0" 00:57:10.412 ], 00:57:10.412 "product_name": "Logical Volume", 00:57:10.412 "block_size": 4096, 00:57:10.412 "num_blocks": 26476544, 00:57:10.412 "uuid": "6d2dc4d5-7db0-40e9-9375-ce8fe10699d0", 00:57:10.412 "assigned_rate_limits": { 00:57:10.412 "rw_ios_per_sec": 0, 00:57:10.412 "rw_mbytes_per_sec": 0, 00:57:10.412 "r_mbytes_per_sec": 0, 00:57:10.412 "w_mbytes_per_sec": 0 00:57:10.412 }, 00:57:10.412 "claimed": false, 00:57:10.412 "zoned": false, 00:57:10.412 "supported_io_types": { 00:57:10.412 "read": true, 00:57:10.412 "write": true, 00:57:10.412 "unmap": true, 00:57:10.412 "flush": false, 00:57:10.412 "reset": true, 00:57:10.412 "nvme_admin": false, 00:57:10.412 "nvme_io": false, 00:57:10.412 "nvme_io_md": false, 00:57:10.412 "write_zeroes": true, 00:57:10.412 "zcopy": false, 00:57:10.412 "get_zone_info": false, 00:57:10.412 "zone_management": false, 00:57:10.412 "zone_append": false, 00:57:10.412 "compare": false, 00:57:10.412 "compare_and_write": false, 00:57:10.412 "abort": false, 00:57:10.412 "seek_hole": true, 00:57:10.412 "seek_data": true, 00:57:10.412 "copy": false, 00:57:10.412 "nvme_iov_md": false 00:57:10.412 }, 00:57:10.412 "driver_specific": { 00:57:10.412 "lvol": { 00:57:10.412 "lvol_store_uuid": "c12fc8e8-ab9b-4f24-a352-0a61f2282c72", 00:57:10.412 "base_bdev": "nvme0n1", 00:57:10.412 "thin_provision": true, 00:57:10.412 "num_allocated_clusters": 0, 00:57:10.412 "snapshot": false, 00:57:10.412 "clone": false, 00:57:10.412 "esnap_clone": false 00:57:10.412 } 00:57:10.412 } 00:57:10.412 } 00:57:10.412 ]' 00:57:10.412 10:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:57:10.671 10:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:57:10.671 10:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:57:10.671 10:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:57:10.671 10:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:57:10.671 10:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:57:10.671 10:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:57:10.671 10:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6d2dc4d5-7db0-40e9-9375-ce8fe10699d0 -c nvc0n1p0 --l2p_dram_limit 20 00:57:10.931 [2024-12-09 10:08:17.770921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:10.931 [2024-12-09 10:08:17.770983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:57:10.931 [2024-12-09 10:08:17.771022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:57:10.931 [2024-12-09 10:08:17.771037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:10.931 [2024-12-09 10:08:17.771118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:10.931 [2024-12-09 10:08:17.771139] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:57:10.931 [2024-12-09 10:08:17.771153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:57:10.931 [2024-12-09 10:08:17.771167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:10.931 [2024-12-09 10:08:17.771194] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:57:10.931 [2024-12-09 10:08:17.772315] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:57:10.931 [2024-12-09 10:08:17.772499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:10.931 [2024-12-09 10:08:17.772528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:57:10.931 [2024-12-09 10:08:17.772545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:57:10.931 [2024-12-09 10:08:17.772560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:10.931 [2024-12-09 10:08:17.772707] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 01ed17ba-e817-4d22-8210-7aeec50eb433 00:57:10.931 [2024-12-09 10:08:17.774614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:10.931 [2024-12-09 10:08:17.774654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:57:10.931 [2024-12-09 10:08:17.774690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:57:10.931 [2024-12-09 10:08:17.774702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:10.931 [2024-12-09 10:08:17.784533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:10.931 [2024-12-09 10:08:17.784590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:57:10.931 [2024-12-09 10:08:17.784628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.731 ms 00:57:10.931 [2024-12-09 10:08:17.784645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:10.931 [2024-12-09 10:08:17.784787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:10.931 [2024-12-09 10:08:17.784816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:57:10.931 [2024-12-09 10:08:17.784838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:57:10.931 [2024-12-09 10:08:17.784851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:10.931 [2024-12-09 10:08:17.784948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:10.931 [2024-12-09 10:08:17.784968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:57:10.931 [2024-12-09 10:08:17.784986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:57:10.931 [2024-12-09 10:08:17.784999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:10.931 [2024-12-09 10:08:17.785039] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:57:10.931 [2024-12-09 10:08:17.790390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:10.931 [2024-12-09 10:08:17.790607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:57:10.931 [2024-12-09 10:08:17.790637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.369 ms 00:57:10.931 [2024-12-09 10:08:17.790661] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:10.931 [2024-12-09 10:08:17.790714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:10.931 [2024-12-09 10:08:17.790734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:57:10.931 [2024-12-09 10:08:17.790748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:57:10.931 [2024-12-09 10:08:17.790763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:10.931 [2024-12-09 10:08:17.790813] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:57:10.931 [2024-12-09 10:08:17.790991] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:57:10.931 [2024-12-09 10:08:17.791011] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:57:10.931 [2024-12-09 10:08:17.791030] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:57:10.931 [2024-12-09 10:08:17.791046] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:57:10.931 [2024-12-09 10:08:17.791063] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:57:10.931 [2024-12-09 10:08:17.791077] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:57:10.931 [2024-12-09 10:08:17.791092] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:57:10.931 [2024-12-09 10:08:17.791104] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:57:10.932 [2024-12-09 10:08:17.791120] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:57:10.932 [2024-12-09 10:08:17.791137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:10.932 [2024-12-09 10:08:17.791151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:57:10.932 [2024-12-09 10:08:17.791164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:57:10.932 [2024-12-09 10:08:17.791179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:10.932 [2024-12-09 10:08:17.791298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:10.932 [2024-12-09 10:08:17.791321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:57:10.932 [2024-12-09 10:08:17.791334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:57:10.932 [2024-12-09 10:08:17.791351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:10.932 [2024-12-09 10:08:17.791456] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:57:10.932 [2024-12-09 10:08:17.791480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:57:10.932 [2024-12-09 10:08:17.791493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:57:10.932 [2024-12-09 10:08:17.791508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:10.932 [2024-12-09 10:08:17.791521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:57:10.932 [2024-12-09 10:08:17.791534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:57:10.932 [2024-12-09 10:08:17.791545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:57:10.932 
[2024-12-09 10:08:17.791558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:57:10.932 [2024-12-09 10:08:17.791569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:57:10.932 [2024-12-09 10:08:17.791583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:57:10.932 [2024-12-09 10:08:17.791593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:57:10.932 [2024-12-09 10:08:17.791623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:57:10.932 [2024-12-09 10:08:17.791634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:57:10.932 [2024-12-09 10:08:17.791649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:57:10.932 [2024-12-09 10:08:17.791660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:57:10.932 [2024-12-09 10:08:17.791676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:10.932 [2024-12-09 10:08:17.791688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:57:10.932 [2024-12-09 10:08:17.791701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:57:10.932 [2024-12-09 10:08:17.791712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:10.932 [2024-12-09 10:08:17.791726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:57:10.932 [2024-12-09 10:08:17.791737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:57:10.932 [2024-12-09 10:08:17.791752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:10.932 [2024-12-09 10:08:17.791764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:57:10.932 [2024-12-09 10:08:17.791778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:57:10.932 [2024-12-09 10:08:17.791789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:10.932 [2024-12-09 10:08:17.791803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:57:10.932 [2024-12-09 10:08:17.791814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:57:10.932 [2024-12-09 10:08:17.791827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:10.932 [2024-12-09 10:08:17.791838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:57:10.932 [2024-12-09 10:08:17.791852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:57:10.932 [2024-12-09 10:08:17.791864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:10.932 [2024-12-09 10:08:17.791879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:57:10.932 [2024-12-09 10:08:17.791891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:57:10.932 [2024-12-09 10:08:17.791904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:57:10.932 [2024-12-09 10:08:17.791915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:57:10.932 [2024-12-09 10:08:17.791929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:57:10.932 [2024-12-09 10:08:17.791940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:57:10.932 [2024-12-09 10:08:17.791955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:57:10.932 [2024-12-09 10:08:17.791967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:57:10.932 [2024-12-09 10:08:17.791980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:10.932 [2024-12-09 10:08:17.791992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:57:10.932 [2024-12-09 10:08:17.792005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:57:10.932 [2024-12-09 10:08:17.792016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:10.932 [2024-12-09 10:08:17.792029] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:57:10.932 [2024-12-09 10:08:17.792041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:57:10.932 [2024-12-09 10:08:17.792055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:57:10.932 [2024-12-09 10:08:17.792068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:10.932 [2024-12-09 10:08:17.792085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:57:10.932 [2024-12-09 10:08:17.792097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:57:10.932 [2024-12-09 10:08:17.792110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:57:10.932 [2024-12-09 10:08:17.792121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:57:10.932 [2024-12-09 10:08:17.792135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:57:10.932 [2024-12-09 10:08:17.792146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:57:10.932 [2024-12-09 10:08:17.792163] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:57:10.932 [2024-12-09 10:08:17.792178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:57:10.932 [2024-12-09 10:08:17.792194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:57:10.932 [2024-12-09 10:08:17.792207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:57:10.932 [2024-12-09 10:08:17.792221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:57:10.932 [2024-12-09 10:08:17.792233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:57:10.932 [2024-12-09 10:08:17.792261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:57:10.932 [2024-12-09 10:08:17.792276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:57:10.932 [2024-12-09 10:08:17.792291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:57:10.932 [2024-12-09 10:08:17.792303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:57:10.932 [2024-12-09 10:08:17.792321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:57:10.932 [2024-12-09 10:08:17.792333] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:57:10.932 [2024-12-09 10:08:17.792347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:57:10.932 [2024-12-09 10:08:17.792359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:57:10.932 [2024-12-09 10:08:17.792373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:57:10.932 [2024-12-09 10:08:17.792386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:57:10.932 [2024-12-09 10:08:17.792400] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:57:10.932 [2024-12-09 10:08:17.792414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:57:10.932 [2024-12-09 10:08:17.792433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:57:10.932 [2024-12-09 10:08:17.792445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:57:10.932 [2024-12-09 10:08:17.792459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:57:10.932 [2024-12-09 10:08:17.792471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:57:10.932 [2024-12-09 10:08:17.792487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:10.932 [2024-12-09 10:08:17.792500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:57:10.932 [2024-12-09 10:08:17.792515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.094 ms 00:57:10.932 [2024-12-09 10:08:17.792527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:10.932 [2024-12-09 10:08:17.792582] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
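Condensed from the RPC calls traced earlier in this run, the bdev stack that FTL is starting up on was built roughly as follows (a sketch, not the verbatim test script; the lvstore/lvol UUIDs are captured from command output and differ on every run):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base data device -> nvme0n1
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device -> nvc0n1
lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                    # lvstore on the base device
lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # thin-provisioned 103424 MiB volume
$rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB cache slice -> nvc0n1p0
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20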
00:57:10.932 [2024-12-09 10:08:17.792599] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:57:13.464 [2024-12-09 10:08:20.223442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.464 [2024-12-09 10:08:20.223524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:57:13.464 [2024-12-09 10:08:20.223552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2430.861 ms 00:57:13.464 [2024-12-09 10:08:20.223567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.464 [2024-12-09 10:08:20.262195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.464 [2024-12-09 10:08:20.262277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:57:13.464 [2024-12-09 10:08:20.262304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.345 ms 00:57:13.464 [2024-12-09 10:08:20.262318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.464 [2024-12-09 10:08:20.262511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.464 [2024-12-09 10:08:20.262533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:57:13.464 [2024-12-09 10:08:20.262554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:57:13.464 [2024-12-09 10:08:20.262567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.464 [2024-12-09 10:08:20.316448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.464 [2024-12-09 10:08:20.316519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:57:13.464 [2024-12-09 10:08:20.316562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.790 ms 00:57:13.464 [2024-12-09 10:08:20.316577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.464 [2024-12-09 10:08:20.316649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.464 [2024-12-09 10:08:20.316666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:57:13.464 [2024-12-09 10:08:20.316683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:57:13.464 [2024-12-09 10:08:20.316699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.464 [2024-12-09 10:08:20.317355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.464 [2024-12-09 10:08:20.317381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:57:13.464 [2024-12-09 10:08:20.317400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:57:13.464 [2024-12-09 10:08:20.317413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.464 [2024-12-09 10:08:20.317582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.464 [2024-12-09 10:08:20.317607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:57:13.464 [2024-12-09 10:08:20.317627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:57:13.464 [2024-12-09 10:08:20.317639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.464 [2024-12-09 10:08:20.337213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.464 [2024-12-09 10:08:20.337283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:57:13.464 [2024-12-09 
10:08:20.337308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.542 ms 00:57:13.464 [2024-12-09 10:08:20.337337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.464 [2024-12-09 10:08:20.352039] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:57:13.464 [2024-12-09 10:08:20.359796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.464 [2024-12-09 10:08:20.360016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:57:13.465 [2024-12-09 10:08:20.360049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.325 ms 00:57:13.465 [2024-12-09 10:08:20.360067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.465 [2024-12-09 10:08:20.427225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.465 [2024-12-09 10:08:20.427336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:57:13.465 [2024-12-09 10:08:20.427360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.098 ms 00:57:13.465 [2024-12-09 10:08:20.427376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.465 [2024-12-09 10:08:20.427621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.465 [2024-12-09 10:08:20.427649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:57:13.465 [2024-12-09 10:08:20.427664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:57:13.465 [2024-12-09 10:08:20.427685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.465 [2024-12-09 10:08:20.458833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.465 [2024-12-09 10:08:20.459097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:57:13.465 [2024-12-09 10:08:20.459128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.078 ms 00:57:13.465 [2024-12-09 10:08:20.459146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.465 [2024-12-09 10:08:20.489721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.465 [2024-12-09 10:08:20.489773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:57:13.465 [2024-12-09 10:08:20.489792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.523 ms 00:57:13.465 [2024-12-09 10:08:20.489814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.465 [2024-12-09 10:08:20.490690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.465 [2024-12-09 10:08:20.490728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:57:13.465 [2024-12-09 10:08:20.490745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:57:13.465 [2024-12-09 10:08:20.490760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.723 [2024-12-09 10:08:20.576576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.723 [2024-12-09 10:08:20.576656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:57:13.723 [2024-12-09 10:08:20.576678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.748 ms 00:57:13.723 [2024-12-09 10:08:20.576695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.723 [2024-12-09 
10:08:20.610872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.723 [2024-12-09 10:08:20.611199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:57:13.723 [2024-12-09 10:08:20.611240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.070 ms 00:57:13.723 [2024-12-09 10:08:20.611281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.723 [2024-12-09 10:08:20.646616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.723 [2024-12-09 10:08:20.646716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:57:13.723 [2024-12-09 10:08:20.646739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.246 ms 00:57:13.723 [2024-12-09 10:08:20.646755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.723 [2024-12-09 10:08:20.678961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.723 [2024-12-09 10:08:20.679327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:57:13.723 [2024-12-09 10:08:20.679359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.120 ms 00:57:13.723 [2024-12-09 10:08:20.679377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.723 [2024-12-09 10:08:20.679459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.723 [2024-12-09 10:08:20.679487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:57:13.723 [2024-12-09 10:08:20.679502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:57:13.723 [2024-12-09 10:08:20.679517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.723 [2024-12-09 10:08:20.679670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:13.723 [2024-12-09 10:08:20.679695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:57:13.723 [2024-12-09 10:08:20.679710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:57:13.723 [2024-12-09 10:08:20.679725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:13.723 [2024-12-09 10:08:20.681051] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2909.614 ms, result 0 00:57:13.723 { 00:57:13.723 "name": "ftl0", 00:57:13.723 "uuid": "01ed17ba-e817-4d22-8210-7aeec50eb433" 00:57:13.723 } 00:57:13.723 10:08:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:57:13.723 10:08:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:57:13.723 10:08:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:57:14.290 10:08:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:57:14.290 [2024-12-09 10:08:21.145558] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:57:14.290 I/O size of 69632 is greater than zero copy threshold (65536). 00:57:14.290 Zero copy mechanism will not be used. 00:57:14.290 Running I/O for 4 seconds... 
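Once startup reports result 0, the script confirms the new device registered under the expected name before driving I/O; the bdevperf.sh@28 trace above amounts to this one-liner:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0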
00:57:16.178 1680.00 IOPS, 111.56 MiB/s
[2024-12-09T10:08:24.157Z] 1692.50 IOPS, 112.39 MiB/s
[2024-12-09T10:08:25.532Z] 1772.33 IOPS, 117.69 MiB/s
[2024-12-09T10:08:25.532Z] 1825.25 IOPS, 121.21 MiB/s
00:57:18.488 Latency(us)
00:57:18.488 [2024-12-09T10:08:25.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:57:18.488 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:57:18.488 ftl0 : 4.00 1824.57 121.16 0.00 0.00 570.86 234.59 2740.60
00:57:18.488 [2024-12-09T10:08:25.532Z] ===================================================================================================================
00:57:18.488 [2024-12-09T10:08:25.532Z] Total : 1824.57 121.16 0.00 0.00 570.86 234.59 2740.60
00:57:18.488 [2024-12-09 10:08:25.158836] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:57:18.488 {
00:57:18.488 "results": [
00:57:18.488 {
00:57:18.488 "job": "ftl0",
00:57:18.488 "core_mask": "0x1",
00:57:18.488 "workload": "randwrite",
00:57:18.488 "status": "finished",
00:57:18.488 "queue_depth": 1,
00:57:18.488 "io_size": 69632,
00:57:18.488 "runtime": 4.00204,
00:57:18.488 "iops": 1824.569469570519,
00:57:18.488 "mibps": 121.16281633866728,
00:57:18.488 "io_failed": 0,
00:57:18.488 "io_timeout": 0,
00:57:18.488 "avg_latency_us": 570.8590921540798,
00:57:18.488 "min_latency_us": 234.5890909090909,
00:57:18.488 "max_latency_us": 2740.5963636363635
00:57:18.488 }
00:57:18.488 ],
00:57:18.488 "core_count": 1
00:57:18.488 }
00:57:18.488 10:08:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:57:18.488 [2024-12-09 10:08:25.330357] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:57:18.488 Running I/O for 4 seconds...
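The MiB/s column in the table above can be cross-checked against the JSON fields, since throughput is just IOPS times I/O size; a quick recomputation with the reported values:
awk -v iops=1824.569469570519 -v iosz=69632 \
    'BEGIN { printf "%.2f MiB/s\n", iops * iosz / (1024 * 1024) }'   # prints 121.16, matching the reported mibps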
00:57:20.358 6988.00 IOPS, 27.30 MiB/s
[2024-12-09T10:08:28.781Z] 7224.50 IOPS, 28.22 MiB/s
[2024-12-09T10:08:29.349Z] 7311.33 IOPS, 28.56 MiB/s
[2024-12-09T10:08:29.607Z] 7229.75 IOPS, 28.24 MiB/s
00:57:22.563 Latency(us)
00:57:22.563 [2024-12-09T10:08:29.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:57:22.563 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:57:22.563 ftl0 : 4.02 7224.77 28.22 0.00 0.00 17670.64 314.65 36938.47
00:57:22.563 [2024-12-09T10:08:29.607Z] ===================================================================================================================
00:57:22.563 [2024-12-09T10:08:29.607Z] Total : 7224.77 28.22 0.00 0.00 17670.64 0.00 36938.47
00:57:22.563 {
00:57:22.563 "results": [
00:57:22.563 {
00:57:22.563 "job": "ftl0",
00:57:22.563 "core_mask": "0x1",
00:57:22.563 "workload": "randwrite",
00:57:22.563 "status": "finished",
00:57:22.563 "queue_depth": 128,
00:57:22.563 "io_size": 4096,
00:57:22.563 "runtime": 4.020195,
00:57:22.563 "iops": 7224.773922657981,
00:57:22.563 "mibps": 28.221773135382737,
00:57:22.563 "io_failed": 0,
00:57:22.563 "io_timeout": 0,
00:57:22.563 "avg_latency_us": 17670.644501854487,
00:57:22.563 "min_latency_us": 314.6472727272727,
00:57:22.563 "max_latency_us": 36938.472727272725
00:57:22.563 }
00:57:22.563 ],
00:57:22.563 "core_count": 1
00:57:22.563 }
00:57:22.563 [2024-12-09 10:08:29.362555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:57:22.563 10:08:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:57:22.563 [2024-12-09 10:08:29.509529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:57:22.563 Running I/O for 4 seconds...
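All three measurement passes reuse the same idle bdevperf process; only the workload parameters passed to perform_tests change between runs, as traced at bdevperf.sh@30, @31 and @32:
bp=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
$bp perform_tests -q 1 -w randwrite -t 4 -o 69632    # shallow queue, 68 KiB writes
$bp perform_tests -q 128 -w randwrite -t 4 -o 4096   # deep queue, 4 KiB writes
$bp perform_tests -q 128 -w verify -t 4 -o 4096      # 4 KiB write-then-read verify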
00:57:24.877 5737.00 IOPS, 22.41 MiB/s
[2024-12-09T10:08:32.858Z] 5555.00 IOPS, 21.70 MiB/s
[2024-12-09T10:08:33.811Z] 5495.67 IOPS, 21.47 MiB/s
[2024-12-09T10:08:33.811Z] 5577.75 IOPS, 21.79 MiB/s
00:57:26.767 Latency(us)
00:57:26.767 [2024-12-09T10:08:33.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:57:26.767 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:57:26.767 Verification LBA range: start 0x0 length 0x1400000
00:57:26.767 ftl0 : 4.01 5590.16 21.84 0.00 0.00 22816.89 370.50 35270.28
00:57:26.767 [2024-12-09T10:08:33.811Z] ===================================================================================================================
00:57:26.767 [2024-12-09T10:08:33.811Z] Total : 5590.16 21.84 0.00 0.00 22816.89 0.00 35270.28
00:57:26.767 {
00:57:26.767 "results": [
00:57:26.767 {
00:57:26.767 "job": "ftl0",
00:57:26.767 "core_mask": "0x1",
00:57:26.767 "workload": "verify",
00:57:26.767 "status": "finished",
00:57:26.767 "verify_range": {
00:57:26.767 "start": 0,
00:57:26.767 "length": 20971520
00:57:26.767 },
00:57:26.767 "queue_depth": 128,
00:57:26.767 "io_size": 4096,
00:57:26.767 "runtime": 4.013839,
00:57:26.767 "iops": 5590.159445857195,
00:57:26.767 "mibps": 21.83656033537967,
00:57:26.767 "io_failed": 0,
00:57:26.767 "io_timeout": 0,
00:57:26.767 "avg_latency_us": 22816.888663873786,
00:57:26.767 "min_latency_us": 370.5018181818182,
00:57:26.767 "max_latency_us": 35270.28363636364
00:57:26.767 }
00:57:26.767 ],
00:57:26.767 "core_count": 1
00:57:26.767 }
00:57:26.767 [2024-12-09 10:08:33.542578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:57:26.767 10:08:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:57:27.026 [2024-12-09 10:08:33.839911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:57:27.026 [2024-12-09 10:08:33.840003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:57:27.026 [2024-12-09 10:08:33.840025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:57:27.026 [2024-12-09 10:08:33.840039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:57:27.026 [2024-12-09 10:08:33.840071] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:57:27.026 [2024-12-09 10:08:33.843704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:57:27.026 [2024-12-09 10:08:33.843911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:57:27.026 [2024-12-09 10:08:33.843944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.606 ms
00:57:27.026 [2024-12-09 10:08:33.843958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:57:27.026 [2024-12-09 10:08:33.845739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:57:27.026 [2024-12-09 10:08:33.845807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:57:27.026 [2024-12-09 10:08:33.845874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms
00:57:27.026 [2024-12-09 10:08:33.845887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:57:27.026 [2024-12-09 10:08:34.033598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:57:27.026 [2024-12-09 10:08:34.033689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:57:27.026 [2024-12-09 10:08:34.033720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 187.667 ms 00:57:27.026 [2024-12-09 10:08:34.033734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.026 [2024-12-09 10:08:34.040786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:27.026 [2024-12-09 10:08:34.040828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:57:27.026 [2024-12-09 10:08:34.040849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.995 ms 00:57:27.026 [2024-12-09 10:08:34.040879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.287 [2024-12-09 10:08:34.074745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:27.287 [2024-12-09 10:08:34.074812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:57:27.287 [2024-12-09 10:08:34.074854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.754 ms 00:57:27.287 [2024-12-09 10:08:34.074866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.288 [2024-12-09 10:08:34.095778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:27.288 [2024-12-09 10:08:34.095860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:57:27.288 [2024-12-09 10:08:34.095901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.809 ms 00:57:27.288 [2024-12-09 10:08:34.095917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.288 [2024-12-09 10:08:34.096159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:27.288 [2024-12-09 10:08:34.096181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:57:27.288 [2024-12-09 10:08:34.096200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:57:27.288 [2024-12-09 10:08:34.096212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.288 [2024-12-09 10:08:34.126138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:27.288 [2024-12-09 10:08:34.126211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:57:27.288 [2024-12-09 10:08:34.126264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.898 ms 00:57:27.288 [2024-12-09 10:08:34.126276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.288 [2024-12-09 10:08:34.154718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:27.288 [2024-12-09 10:08:34.154762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:57:27.288 [2024-12-09 10:08:34.154783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.360 ms 00:57:27.288 [2024-12-09 10:08:34.154796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.288 [2024-12-09 10:08:34.182718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:27.288 [2024-12-09 10:08:34.182774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:57:27.288 [2024-12-09 10:08:34.182811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.871 ms 00:57:27.288 [2024-12-09 10:08:34.182822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.288 [2024-12-09 10:08:34.210842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:27.288 [2024-12-09 10:08:34.210881] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:57:27.288 [2024-12-09 10:08:34.210918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.914 ms
00:57:27.288 [2024-12-09 10:08:34.210929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:57:27.288 [2024-12-09 10:08:34.210974] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:57:27.288 [2024-12-09 10:08:34.210995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free
00:57:27.289 [2024-12-09 10:08:34.212541] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:57:27.289 [2024-12-09 10:08:34.212555] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01ed17ba-e817-4d22-8210-7aeec50eb433
00:57:27.289 [2024-12-09 10:08:34.212579] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:57:27.289 [2024-12-09 10:08:34.212592] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:57:27.289 [2024-12-09 10:08:34.212602] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:57:27.289 [2024-12-09 10:08:34.212616] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:57:27.289 [2024-12-09 10:08:34.212637] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:57:27.289 [2024-12-09 10:08:34.212650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:57:27.289 [2024-12-09 10:08:34.212661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:57:27.289 [2024-12-09 10:08:34.212676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:57:27.289 [2024-12-09 10:08:34.212686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:57:27.289 [2024-12-09 10:08:34.212699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:57:27.289 [2024-12-09 10:08:34.212710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:57:27.289 [2024-12-09 10:08:34.212725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.729 ms
00:57:27.289 [2024-12-09 10:08:34.212736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:57:27.289 [2024-12-09 10:08:34.229382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:57:27.289 [2024-12-09 10:08:34.229435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:57:27.289 [2024-12-09 10:08:34.229471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.569 ms
00:57:27.289 [2024-12-09 10:08:34.229484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:57:27.289 [2024-12-09 10:08:34.229987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:57:27.289 [2024-12-09 10:08:34.230018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:57:27.289 [2024-12-09 10:08:34.230036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms
00:57:27.289 [2024-12-09 10:08:34.230048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:57:27.289 [2024-12-09 10:08:34.274631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:57:27.289 [2024-12-09 10:08:34.274709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:57:27.289 [2024-12-09 10:08:34.274747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:57:27.289 [2024-12-09 10:08:34.274759] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:57:27.289 [2024-12-09 10:08:34.274840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:27.289 [2024-12-09 10:08:34.274855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:57:27.289 [2024-12-09 10:08:34.274870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:27.289 [2024-12-09 10:08:34.274881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.289 [2024-12-09 10:08:34.275005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:27.289 [2024-12-09 10:08:34.275024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:57:27.289 [2024-12-09 10:08:34.275038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:27.289 [2024-12-09 10:08:34.275049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.289 [2024-12-09 10:08:34.275073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:27.289 [2024-12-09 10:08:34.275086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:57:27.289 [2024-12-09 10:08:34.275100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:27.289 [2024-12-09 10:08:34.275110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.548 [2024-12-09 10:08:34.373442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:27.548 [2024-12-09 10:08:34.373512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:57:27.548 [2024-12-09 10:08:34.373554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:27.548 [2024-12-09 10:08:34.373565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.548 [2024-12-09 10:08:34.452548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:27.548 [2024-12-09 10:08:34.452613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:57:27.548 [2024-12-09 10:08:34.452651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:27.548 [2024-12-09 10:08:34.452663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.548 [2024-12-09 10:08:34.452809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:27.548 [2024-12-09 10:08:34.452828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:57:27.548 [2024-12-09 10:08:34.452844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:27.548 [2024-12-09 10:08:34.452855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.548 [2024-12-09 10:08:34.452920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:27.548 [2024-12-09 10:08:34.452938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:57:27.548 [2024-12-09 10:08:34.452961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:27.548 [2024-12-09 10:08:34.452972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.548 [2024-12-09 10:08:34.453093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:27.548 [2024-12-09 10:08:34.453115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:57:27.548 [2024-12-09 10:08:34.453132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:57:27.548 [2024-12-09 10:08:34.453143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.548 [2024-12-09 10:08:34.453200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:27.548 [2024-12-09 10:08:34.453218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:57:27.548 [2024-12-09 10:08:34.453233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:27.548 [2024-12-09 10:08:34.453244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.548 [2024-12-09 10:08:34.453353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:27.548 [2024-12-09 10:08:34.453390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:57:27.548 [2024-12-09 10:08:34.453407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:27.548 [2024-12-09 10:08:34.453447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.548 [2024-12-09 10:08:34.453508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:27.548 [2024-12-09 10:08:34.453527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:57:27.548 [2024-12-09 10:08:34.453543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:27.548 [2024-12-09 10:08:34.453554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:27.548 [2024-12-09 10:08:34.453720] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 613.766 ms, result 0 00:57:27.548 true 00:57:27.548 10:08:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78217 00:57:27.548 10:08:34 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78217 ']' 00:57:27.548 10:08:34 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78217 00:57:27.548 10:08:34 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:57:27.548 10:08:34 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:27.548 10:08:34 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78217 00:57:27.548 killing process with pid 78217 00:57:27.548 Received shutdown signal, test time was about 4.000000 seconds 00:57:27.548 00:57:27.548 Latency(us) 00:57:27.548 [2024-12-09T10:08:34.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:57:27.548 [2024-12-09T10:08:34.592Z] =================================================================================================================== 00:57:27.548 [2024-12-09T10:08:34.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:57:27.548 10:08:34 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:27.548 10:08:34 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:27.548 10:08:34 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78217' 00:57:27.548 10:08:34 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78217 00:57:27.548 10:08:34 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78217 00:57:31.799 Remove shared memory files 00:57:31.799 10:08:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:57:31.799 10:08:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:57:31.799 10:08:38 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:57:31.799 10:08:38 
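
The killprocess 78217 call traced above follows autotest_common.sh's usual teardown pattern: check that a pid was given, probe the process with kill -0, read its command name back with ps (and refuse to signal a bare sudo wrapper), then SIGTERM it and wait for the exit status. A simplified reconstruction of that flow, not the exact source:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                 # require a pid argument
        kill -0 "$pid" || return 1                # fail fast if the process is already gone
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1   # never SIGTERM a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                # SIGTERM, then reap the child
    }
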
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:57:31.799 10:08:38 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:57:31.799 10:08:38 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:57:31.799 10:08:38 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:57:31.799 10:08:38 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:57:31.799 ************************************ 00:57:31.799 END TEST ftl_bdevperf 00:57:31.799 ************************************ 00:57:31.799 00:57:31.799 real 0m25.495s 00:57:31.799 user 0m29.271s 00:57:31.799 sys 0m1.247s 00:57:31.800 10:08:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:31.800 10:08:38 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:57:31.800 10:08:38 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:57:31.800 10:08:38 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:57:31.800 10:08:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:31.800 10:08:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:57:31.800 ************************************ 00:57:31.800 START TEST ftl_trim 00:57:31.800 ************************************ 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:57:31.800 * Looking for test storage... 00:57:31.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:31.800 10:08:38 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:31.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:31.800 --rc genhtml_branch_coverage=1 00:57:31.800 --rc genhtml_function_coverage=1 00:57:31.800 --rc genhtml_legend=1 00:57:31.800 --rc geninfo_all_blocks=1 00:57:31.800 --rc geninfo_unexecuted_blocks=1 00:57:31.800 00:57:31.800 ' 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:31.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:31.800 --rc genhtml_branch_coverage=1 00:57:31.800 --rc genhtml_function_coverage=1 00:57:31.800 --rc genhtml_legend=1 00:57:31.800 --rc geninfo_all_blocks=1 00:57:31.800 --rc geninfo_unexecuted_blocks=1 00:57:31.800 00:57:31.800 ' 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:31.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:31.800 --rc genhtml_branch_coverage=1 00:57:31.800 --rc genhtml_function_coverage=1 00:57:31.800 --rc genhtml_legend=1 00:57:31.800 --rc geninfo_all_blocks=1 00:57:31.800 --rc geninfo_unexecuted_blocks=1 00:57:31.800 00:57:31.800 ' 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:31.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:31.800 --rc genhtml_branch_coverage=1 00:57:31.800 --rc genhtml_function_coverage=1 00:57:31.800 --rc genhtml_legend=1 00:57:31.800 --rc geninfo_all_blocks=1 00:57:31.800 --rc geninfo_unexecuted_blocks=1 00:57:31.800 00:57:31.800 ' 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
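
The xtrace above is scripts/common.sh's version guard: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both version strings on the characters . - : into the arrays ver1 and ver2 and walks them component by component up to the longer length. A minimal standalone sketch of that comparison, assuming purely numeric components (the real helper also routes each field through a decimal() sanity check):

    cmp_versions() {
        local IFS=.-:                             # split fields on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v a b
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}       # missing fields compare as 0
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]   # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }          # lt 1.15 2 -> true, so the lcov branch is taken
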
00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:57:31.800 10:08:38 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78569 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78569 00:57:31.800 10:08:38 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78569 ']' 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:31.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:31.800 10:08:38 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:57:31.800 [2024-12-09 10:08:38.466412] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:57:31.800 [2024-12-09 10:08:38.466563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78569 ] 00:57:31.800 [2024-12-09 10:08:38.651397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:57:31.800 [2024-12-09 10:08:38.826202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:31.800 [2024-12-09 10:08:38.826314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:31.800 [2024-12-09 10:08:38.826333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:32.736 10:08:39 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:32.736 10:08:39 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:57:32.736 10:08:39 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:57:32.736 10:08:39 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:57:32.736 10:08:39 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:57:32.736 10:08:39 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:57:32.736 10:08:39 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:57:32.736 10:08:39 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:57:33.303 10:08:40 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:57:33.303 10:08:40 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:57:33.303 10:08:40 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:57:33.303 10:08:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:57:33.303 10:08:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:57:33.303 10:08:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:57:33.303 10:08:40 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:57:33.303 10:08:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:57:33.562 10:08:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:57:33.562 { 00:57:33.562 "name": "nvme0n1", 00:57:33.562 "aliases": [ 
00:57:33.562 "7ffeb982-a652-4d94-ad6a-c03ac06007ac" 00:57:33.562 ], 00:57:33.562 "product_name": "NVMe disk", 00:57:33.562 "block_size": 4096, 00:57:33.562 "num_blocks": 1310720, 00:57:33.562 "uuid": "7ffeb982-a652-4d94-ad6a-c03ac06007ac", 00:57:33.562 "numa_id": -1, 00:57:33.562 "assigned_rate_limits": { 00:57:33.562 "rw_ios_per_sec": 0, 00:57:33.562 "rw_mbytes_per_sec": 0, 00:57:33.562 "r_mbytes_per_sec": 0, 00:57:33.562 "w_mbytes_per_sec": 0 00:57:33.562 }, 00:57:33.562 "claimed": true, 00:57:33.562 "claim_type": "read_many_write_one", 00:57:33.562 "zoned": false, 00:57:33.562 "supported_io_types": { 00:57:33.562 "read": true, 00:57:33.562 "write": true, 00:57:33.562 "unmap": true, 00:57:33.562 "flush": true, 00:57:33.562 "reset": true, 00:57:33.562 "nvme_admin": true, 00:57:33.562 "nvme_io": true, 00:57:33.562 "nvme_io_md": false, 00:57:33.562 "write_zeroes": true, 00:57:33.562 "zcopy": false, 00:57:33.562 "get_zone_info": false, 00:57:33.562 "zone_management": false, 00:57:33.562 "zone_append": false, 00:57:33.562 "compare": true, 00:57:33.562 "compare_and_write": false, 00:57:33.562 "abort": true, 00:57:33.562 "seek_hole": false, 00:57:33.562 "seek_data": false, 00:57:33.562 "copy": true, 00:57:33.562 "nvme_iov_md": false 00:57:33.562 }, 00:57:33.562 "driver_specific": { 00:57:33.562 "nvme": [ 00:57:33.562 { 00:57:33.562 "pci_address": "0000:00:11.0", 00:57:33.562 "trid": { 00:57:33.562 "trtype": "PCIe", 00:57:33.562 "traddr": "0000:00:11.0" 00:57:33.562 }, 00:57:33.562 "ctrlr_data": { 00:57:33.562 "cntlid": 0, 00:57:33.562 "vendor_id": "0x1b36", 00:57:33.562 "model_number": "QEMU NVMe Ctrl", 00:57:33.562 "serial_number": "12341", 00:57:33.562 "firmware_revision": "8.0.0", 00:57:33.562 "subnqn": "nqn.2019-08.org.qemu:12341", 00:57:33.562 "oacs": { 00:57:33.562 "security": 0, 00:57:33.562 "format": 1, 00:57:33.562 "firmware": 0, 00:57:33.562 "ns_manage": 1 00:57:33.562 }, 00:57:33.562 "multi_ctrlr": false, 00:57:33.562 "ana_reporting": false 00:57:33.562 }, 00:57:33.562 "vs": { 00:57:33.562 "nvme_version": "1.4" 00:57:33.562 }, 00:57:33.562 "ns_data": { 00:57:33.562 "id": 1, 00:57:33.562 "can_share": false 00:57:33.562 } 00:57:33.562 } 00:57:33.562 ], 00:57:33.562 "mp_policy": "active_passive" 00:57:33.562 } 00:57:33.562 } 00:57:33.562 ]' 00:57:33.562 10:08:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:57:33.562 10:08:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:57:33.562 10:08:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:57:33.562 10:08:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:57:33.562 10:08:40 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:57:33.562 10:08:40 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:57:33.562 10:08:40 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:57:33.562 10:08:40 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:57:33.562 10:08:40 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:57:33.562 10:08:40 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:57:33.562 10:08:40 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:57:33.821 10:08:40 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c12fc8e8-ab9b-4f24-a352-0a61f2282c72 00:57:33.821 10:08:40 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:57:33.821 10:08:40 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c12fc8e8-ab9b-4f24-a352-0a61f2282c72 00:57:34.078 10:08:41 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:57:34.336 10:08:41 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=63098526-a835-47de-a344-ef77f1bd38e5 00:57:34.336 10:08:41 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 63098526-a835-47de-a344-ef77f1bd38e5 00:57:34.903 10:08:41 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=aab70419-22b2-488b-b44c-699744a2b809 00:57:34.903 10:08:41 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 aab70419-22b2-488b-b44c-699744a2b809 00:57:34.903 10:08:41 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:57:34.903 10:08:41 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:57:34.903 10:08:41 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=aab70419-22b2-488b-b44c-699744a2b809 00:57:34.903 10:08:41 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:57:34.903 10:08:41 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size aab70419-22b2-488b-b44c-699744a2b809 00:57:34.903 10:08:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=aab70419-22b2-488b-b44c-699744a2b809 00:57:34.903 10:08:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:57:34.903 10:08:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:57:34.903 10:08:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:57:34.903 10:08:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aab70419-22b2-488b-b44c-699744a2b809 00:57:35.162 10:08:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:57:35.162 { 00:57:35.162 "name": "aab70419-22b2-488b-b44c-699744a2b809", 00:57:35.162 "aliases": [ 00:57:35.162 "lvs/nvme0n1p0" 00:57:35.162 ], 00:57:35.162 "product_name": "Logical Volume", 00:57:35.162 "block_size": 4096, 00:57:35.162 "num_blocks": 26476544, 00:57:35.162 "uuid": "aab70419-22b2-488b-b44c-699744a2b809", 00:57:35.162 "assigned_rate_limits": { 00:57:35.162 "rw_ios_per_sec": 0, 00:57:35.162 "rw_mbytes_per_sec": 0, 00:57:35.162 "r_mbytes_per_sec": 0, 00:57:35.162 "w_mbytes_per_sec": 0 00:57:35.162 }, 00:57:35.162 "claimed": false, 00:57:35.162 "zoned": false, 00:57:35.162 "supported_io_types": { 00:57:35.162 "read": true, 00:57:35.162 "write": true, 00:57:35.162 "unmap": true, 00:57:35.162 "flush": false, 00:57:35.162 "reset": true, 00:57:35.162 "nvme_admin": false, 00:57:35.162 "nvme_io": false, 00:57:35.162 "nvme_io_md": false, 00:57:35.162 "write_zeroes": true, 00:57:35.162 "zcopy": false, 00:57:35.162 "get_zone_info": false, 00:57:35.162 "zone_management": false, 00:57:35.162 "zone_append": false, 00:57:35.162 "compare": false, 00:57:35.162 "compare_and_write": false, 00:57:35.162 "abort": false, 00:57:35.162 "seek_hole": true, 00:57:35.162 "seek_data": true, 00:57:35.162 "copy": false, 00:57:35.162 "nvme_iov_md": false 00:57:35.162 }, 00:57:35.162 "driver_specific": { 00:57:35.162 "lvol": { 00:57:35.162 "lvol_store_uuid": "63098526-a835-47de-a344-ef77f1bd38e5", 00:57:35.162 "base_bdev": "nvme0n1", 00:57:35.162 "thin_provision": true, 00:57:35.162 "num_allocated_clusters": 0, 00:57:35.162 "snapshot": false, 00:57:35.162 "clone": false, 00:57:35.162 "esnap_clone": false 00:57:35.162 } 00:57:35.162 } 00:57:35.162 } 00:57:35.162 ]' 00:57:35.162 10:08:41 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:57:35.162 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:57:35.162 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:57:35.162 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:57:35.162 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:57:35.162 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:57:35.162 10:08:42 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:57:35.162 10:08:42 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:57:35.162 10:08:42 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:57:35.420 10:08:42 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:57:35.420 10:08:42 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:57:35.420 10:08:42 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size aab70419-22b2-488b-b44c-699744a2b809 00:57:35.420 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=aab70419-22b2-488b-b44c-699744a2b809 00:57:35.420 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:57:35.420 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:57:35.420 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:57:35.421 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aab70419-22b2-488b-b44c-699744a2b809 00:57:35.987 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:57:35.987 { 00:57:35.987 "name": "aab70419-22b2-488b-b44c-699744a2b809", 00:57:35.987 "aliases": [ 00:57:35.987 "lvs/nvme0n1p0" 00:57:35.987 ], 00:57:35.987 "product_name": "Logical Volume", 00:57:35.987 "block_size": 4096, 00:57:35.987 "num_blocks": 26476544, 00:57:35.987 "uuid": "aab70419-22b2-488b-b44c-699744a2b809", 00:57:35.987 "assigned_rate_limits": { 00:57:35.987 "rw_ios_per_sec": 0, 00:57:35.987 "rw_mbytes_per_sec": 0, 00:57:35.987 "r_mbytes_per_sec": 0, 00:57:35.987 "w_mbytes_per_sec": 0 00:57:35.987 }, 00:57:35.987 "claimed": false, 00:57:35.987 "zoned": false, 00:57:35.987 "supported_io_types": { 00:57:35.987 "read": true, 00:57:35.987 "write": true, 00:57:35.987 "unmap": true, 00:57:35.987 "flush": false, 00:57:35.987 "reset": true, 00:57:35.987 "nvme_admin": false, 00:57:35.987 "nvme_io": false, 00:57:35.987 "nvme_io_md": false, 00:57:35.987 "write_zeroes": true, 00:57:35.987 "zcopy": false, 00:57:35.987 "get_zone_info": false, 00:57:35.987 "zone_management": false, 00:57:35.987 "zone_append": false, 00:57:35.987 "compare": false, 00:57:35.987 "compare_and_write": false, 00:57:35.987 "abort": false, 00:57:35.987 "seek_hole": true, 00:57:35.987 "seek_data": true, 00:57:35.987 "copy": false, 00:57:35.987 "nvme_iov_md": false 00:57:35.987 }, 00:57:35.987 "driver_specific": { 00:57:35.987 "lvol": { 00:57:35.987 "lvol_store_uuid": "63098526-a835-47de-a344-ef77f1bd38e5", 00:57:35.987 "base_bdev": "nvme0n1", 00:57:35.987 "thin_provision": true, 00:57:35.987 "num_allocated_clusters": 0, 00:57:35.987 "snapshot": false, 00:57:35.987 "clone": false, 00:57:35.987 "esnap_clone": false 00:57:35.987 } 00:57:35.987 } 00:57:35.987 } 00:57:35.987 ]' 00:57:35.987 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:57:35.987 10:08:42 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:57:35.987 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:57:35.988 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:57:35.988 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:57:35.988 10:08:42 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:57:35.988 10:08:42 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:57:35.988 10:08:42 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:57:36.246 10:08:43 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:57:36.246 10:08:43 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:57:36.246 10:08:43 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size aab70419-22b2-488b-b44c-699744a2b809 00:57:36.246 10:08:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=aab70419-22b2-488b-b44c-699744a2b809 00:57:36.246 10:08:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:57:36.246 10:08:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:57:36.246 10:08:43 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:57:36.246 10:08:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aab70419-22b2-488b-b44c-699744a2b809 00:57:36.505 10:08:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:57:36.505 { 00:57:36.505 "name": "aab70419-22b2-488b-b44c-699744a2b809", 00:57:36.505 "aliases": [ 00:57:36.505 "lvs/nvme0n1p0" 00:57:36.505 ], 00:57:36.505 "product_name": "Logical Volume", 00:57:36.505 "block_size": 4096, 00:57:36.505 "num_blocks": 26476544, 00:57:36.505 "uuid": "aab70419-22b2-488b-b44c-699744a2b809", 00:57:36.505 "assigned_rate_limits": { 00:57:36.505 "rw_ios_per_sec": 0, 00:57:36.505 "rw_mbytes_per_sec": 0, 00:57:36.505 "r_mbytes_per_sec": 0, 00:57:36.505 "w_mbytes_per_sec": 0 00:57:36.505 }, 00:57:36.505 "claimed": false, 00:57:36.505 "zoned": false, 00:57:36.505 "supported_io_types": { 00:57:36.505 "read": true, 00:57:36.505 "write": true, 00:57:36.505 "unmap": true, 00:57:36.505 "flush": false, 00:57:36.505 "reset": true, 00:57:36.505 "nvme_admin": false, 00:57:36.505 "nvme_io": false, 00:57:36.505 "nvme_io_md": false, 00:57:36.505 "write_zeroes": true, 00:57:36.505 "zcopy": false, 00:57:36.505 "get_zone_info": false, 00:57:36.505 "zone_management": false, 00:57:36.505 "zone_append": false, 00:57:36.505 "compare": false, 00:57:36.505 "compare_and_write": false, 00:57:36.505 "abort": false, 00:57:36.505 "seek_hole": true, 00:57:36.505 "seek_data": true, 00:57:36.505 "copy": false, 00:57:36.505 "nvme_iov_md": false 00:57:36.505 }, 00:57:36.505 "driver_specific": { 00:57:36.505 "lvol": { 00:57:36.505 "lvol_store_uuid": "63098526-a835-47de-a344-ef77f1bd38e5", 00:57:36.505 "base_bdev": "nvme0n1", 00:57:36.505 "thin_provision": true, 00:57:36.505 "num_allocated_clusters": 0, 00:57:36.505 "snapshot": false, 00:57:36.505 "clone": false, 00:57:36.505 "esnap_clone": false 00:57:36.505 } 00:57:36.505 } 00:57:36.505 } 00:57:36.505 ]' 00:57:36.505 10:08:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:57:36.505 10:08:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:57:36.505 10:08:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:57:36.764 10:08:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:57:36.764 10:08:43 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:57:36.764 10:08:43 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:57:36.764 10:08:43 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:57:36.764 10:08:43 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aab70419-22b2-488b-b44c-699744a2b809 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:57:37.024 [2024-12-09 10:08:43.851794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:37.024 [2024-12-09 10:08:43.851862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:57:37.024 [2024-12-09 10:08:43.851891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:57:37.024 [2024-12-09 10:08:43.851905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:37.024 [2024-12-09 10:08:43.855809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:37.024 [2024-12-09 10:08:43.855854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:57:37.024 [2024-12-09 10:08:43.855876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.865 ms 00:57:37.024 [2024-12-09 10:08:43.855889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:37.024 [2024-12-09 10:08:43.856054] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:57:37.024 [2024-12-09 10:08:43.857043] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:57:37.024 [2024-12-09 10:08:43.857092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:37.024 [2024-12-09 10:08:43.857108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:57:37.024 [2024-12-09 10:08:43.857124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:57:37.024 [2024-12-09 10:08:43.857137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:37.024 [2024-12-09 10:08:43.857392] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 68115f76-6e25-4e17-a078-1a730c2e63d7 00:57:37.024 [2024-12-09 10:08:43.859304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:37.024 [2024-12-09 10:08:43.859352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:57:37.024 [2024-12-09 10:08:43.859371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:57:37.024 [2024-12-09 10:08:43.859387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:37.024 [2024-12-09 10:08:43.869445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:37.024 [2024-12-09 10:08:43.869517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:57:37.024 [2024-12-09 10:08:43.869549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.968 ms 00:57:37.024 [2024-12-09 10:08:43.869564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:37.024 [2024-12-09 10:08:43.869777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:37.024 [2024-12-09 10:08:43.869804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:57:37.024 [2024-12-09 10:08:43.869830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.101 ms 00:57:37.024 [2024-12-09 10:08:43.869853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:37.024 [2024-12-09 10:08:43.869900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:37.024 [2024-12-09 10:08:43.869921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:57:37.024 [2024-12-09 10:08:43.869934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:57:37.024 [2024-12-09 10:08:43.869954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:37.024 [2024-12-09 10:08:43.869997] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:57:37.024 [2024-12-09 10:08:43.875353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:37.024 [2024-12-09 10:08:43.875398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:57:37.024 [2024-12-09 10:08:43.875420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.361 ms 00:57:37.024 [2024-12-09 10:08:43.875436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:37.024 [2024-12-09 10:08:43.875543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:37.024 [2024-12-09 10:08:43.875583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:57:37.024 [2024-12-09 10:08:43.875601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:57:37.024 [2024-12-09 10:08:43.875614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:37.024 [2024-12-09 10:08:43.875659] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:57:37.024 [2024-12-09 10:08:43.875824] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:57:37.024 [2024-12-09 10:08:43.875851] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:57:37.024 [2024-12-09 10:08:43.875868] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:57:37.024 [2024-12-09 10:08:43.875887] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:57:37.024 [2024-12-09 10:08:43.875901] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:57:37.024 [2024-12-09 10:08:43.875917] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:57:37.024 [2024-12-09 10:08:43.875930] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:57:37.024 [2024-12-09 10:08:43.875951] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:57:37.024 [2024-12-09 10:08:43.875966] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:57:37.024 [2024-12-09 10:08:43.875982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:37.024 [2024-12-09 10:08:43.875995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:57:37.024 [2024-12-09 10:08:43.876011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:57:37.024 [2024-12-09 10:08:43.876023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:37.024 [2024-12-09 10:08:43.876143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:37.024 
[2024-12-09 10:08:43.876159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:57:37.024 [2024-12-09 10:08:43.876175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:57:37.024 [2024-12-09 10:08:43.876187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:37.024 [2024-12-09 10:08:43.876354] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:57:37.024 [2024-12-09 10:08:43.876376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:57:37.024 [2024-12-09 10:08:43.876414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:57:37.024 [2024-12-09 10:08:43.876428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:37.024 [2024-12-09 10:08:43.876444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:57:37.024 [2024-12-09 10:08:43.876456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:57:37.024 [2024-12-09 10:08:43.876471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:57:37.024 [2024-12-09 10:08:43.876483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:57:37.024 [2024-12-09 10:08:43.876497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:57:37.024 [2024-12-09 10:08:43.876508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:57:37.024 [2024-12-09 10:08:43.876522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:57:37.024 [2024-12-09 10:08:43.876534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:57:37.024 [2024-12-09 10:08:43.876550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:57:37.024 [2024-12-09 10:08:43.876562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:57:37.024 [2024-12-09 10:08:43.876578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:57:37.024 [2024-12-09 10:08:43.876590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:37.024 [2024-12-09 10:08:43.876607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:57:37.024 [2024-12-09 10:08:43.876618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:57:37.024 [2024-12-09 10:08:43.876632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:37.024 [2024-12-09 10:08:43.876644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:57:37.024 [2024-12-09 10:08:43.876658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:57:37.024 [2024-12-09 10:08:43.876669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:37.024 [2024-12-09 10:08:43.876684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:57:37.024 [2024-12-09 10:08:43.876695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:57:37.024 [2024-12-09 10:08:43.876709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:37.024 [2024-12-09 10:08:43.876720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:57:37.024 [2024-12-09 10:08:43.876734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:57:37.025 [2024-12-09 10:08:43.876746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:37.025 [2024-12-09 10:08:43.876760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:57:37.025 [2024-12-09 10:08:43.876771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:57:37.025 [2024-12-09 10:08:43.876786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:37.025 [2024-12-09 10:08:43.876798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:57:37.025 [2024-12-09 10:08:43.876814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:57:37.025 [2024-12-09 10:08:43.876826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:57:37.025 [2024-12-09 10:08:43.876840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:57:37.025 [2024-12-09 10:08:43.876851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:57:37.025 [2024-12-09 10:08:43.876866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:57:37.025 [2024-12-09 10:08:43.876877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:57:37.025 [2024-12-09 10:08:43.876894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:57:37.025 [2024-12-09 10:08:43.876906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:37.025 [2024-12-09 10:08:43.876920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:57:37.025 [2024-12-09 10:08:43.876932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:57:37.025 [2024-12-09 10:08:43.876953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:37.025 [2024-12-09 10:08:43.876964] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:57:37.025 [2024-12-09 10:08:43.876979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:57:37.025 [2024-12-09 10:08:43.876999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:57:37.025 [2024-12-09 10:08:43.877014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:37.025 [2024-12-09 10:08:43.877026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:57:37.025 [2024-12-09 10:08:43.877052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:57:37.025 [2024-12-09 10:08:43.877064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:57:37.025 [2024-12-09 10:08:43.877078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:57:37.025 [2024-12-09 10:08:43.877089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:57:37.025 [2024-12-09 10:08:43.877104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:57:37.025 [2024-12-09 10:08:43.877117] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:57:37.025 [2024-12-09 10:08:43.877136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:57:37.025 [2024-12-09 10:08:43.877153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:57:37.025 [2024-12-09 10:08:43.877169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:57:37.025 [2024-12-09 10:08:43.877182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:57:37.025 [2024-12-09 10:08:43.877197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:57:37.025 [2024-12-09 10:08:43.877209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:57:37.025 [2024-12-09 10:08:43.877224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:57:37.025 [2024-12-09 10:08:43.877236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:57:37.025 [2024-12-09 10:08:43.877265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:57:37.025 [2024-12-09 10:08:43.877281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:57:37.025 [2024-12-09 10:08:43.877301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:57:37.025 [2024-12-09 10:08:43.877313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:57:37.025 [2024-12-09 10:08:43.877329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:57:37.025 [2024-12-09 10:08:43.877341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:57:37.025 [2024-12-09 10:08:43.877356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:57:37.025 [2024-12-09 10:08:43.877368] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:57:37.025 [2024-12-09 10:08:43.877389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:57:37.025 [2024-12-09 10:08:43.877403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:57:37.025 [2024-12-09 10:08:43.877418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:57:37.025 [2024-12-09 10:08:43.877431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:57:37.025 [2024-12-09 10:08:43.877447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:57:37.025 [2024-12-09 10:08:43.877460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:37.025 [2024-12-09 10:08:43.877475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:57:37.025 [2024-12-09 10:08:43.877488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:57:37.025 [2024-12-09 10:08:43.877504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:37.025 [2024-12-09 10:08:43.877602] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:57:37.025 [2024-12-09 10:08:43.877638] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:57:39.557 [2024-12-09 10:08:46.463438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.557 [2024-12-09 10:08:46.463784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:57:39.557 [2024-12-09 10:08:46.463820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2585.845 ms 00:57:39.557 [2024-12-09 10:08:46.463839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.557 [2024-12-09 10:08:46.504101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.557 [2024-12-09 10:08:46.504172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:57:39.557 [2024-12-09 10:08:46.504195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.878 ms 00:57:39.557 [2024-12-09 10:08:46.504212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.557 [2024-12-09 10:08:46.504473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.557 [2024-12-09 10:08:46.504500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:57:39.557 [2024-12-09 10:08:46.504539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:57:39.557 [2024-12-09 10:08:46.504561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.557 [2024-12-09 10:08:46.559948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.557 [2024-12-09 10:08:46.560026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:57:39.557 [2024-12-09 10:08:46.560049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.334 ms 00:57:39.557 [2024-12-09 10:08:46.560068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.557 [2024-12-09 10:08:46.560212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.557 [2024-12-09 10:08:46.560238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:57:39.557 [2024-12-09 10:08:46.560282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:57:39.557 [2024-12-09 10:08:46.560302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.557 [2024-12-09 10:08:46.560930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.557 [2024-12-09 10:08:46.560975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:57:39.557 [2024-12-09 10:08:46.560992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:57:39.557 [2024-12-09 10:08:46.561008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.557 [2024-12-09 10:08:46.561185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.557 [2024-12-09 10:08:46.561206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:57:39.557 [2024-12-09 10:08:46.561245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:57:39.557 [2024-12-09 10:08:46.561290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.557 [2024-12-09 10:08:46.583528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.557 [2024-12-09 10:08:46.583597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:57:39.557 [2024-12-09 10:08:46.583618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.183 ms 00:57:39.557 [2024-12-09 10:08:46.583636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.557 [2024-12-09 10:08:46.598358] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:57:39.816 [2024-12-09 10:08:46.620855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.816 [2024-12-09 10:08:46.620936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:57:39.816 [2024-12-09 10:08:46.620964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.020 ms 00:57:39.816 [2024-12-09 10:08:46.620979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.816 [2024-12-09 10:08:46.703273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.816 [2024-12-09 10:08:46.703357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:57:39.816 [2024-12-09 10:08:46.703385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.080 ms 00:57:39.816 [2024-12-09 10:08:46.703399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.816 [2024-12-09 10:08:46.703705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.816 [2024-12-09 10:08:46.703728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:57:39.816 [2024-12-09 10:08:46.703750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:57:39.816 [2024-12-09 10:08:46.703763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.816 [2024-12-09 10:08:46.735043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.816 [2024-12-09 10:08:46.735091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:57:39.816 [2024-12-09 10:08:46.735114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.236 ms 00:57:39.816 [2024-12-09 10:08:46.735128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.816 [2024-12-09 10:08:46.765512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.816 [2024-12-09 10:08:46.765562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:57:39.816 [2024-12-09 10:08:46.765586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.270 ms 00:57:39.816 [2024-12-09 10:08:46.765600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.816 [2024-12-09 10:08:46.766574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.816 [2024-12-09 10:08:46.766602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:57:39.816 [2024-12-09 10:08:46.766621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:57:39.816 [2024-12-09 10:08:46.766634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:39.816 [2024-12-09 10:08:46.853086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:39.816 [2024-12-09 10:08:46.853201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:57:39.816 [2024-12-09 10:08:46.853232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.396 ms 00:57:39.816 [2024-12-09 10:08:46.853265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
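Aside: the "l2p maximum resident size is: 59 (of 60) MiB" notice just above follows directly from the --l2p_dram_limit 60 flag passed to bdev_ftl_create at the start of this run. A minimal sketch of that RPC pair, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk and a placeholder for the base bdev UUID (the flags are copied from the invocation recorded earlier in the log):

    # Create the FTL bdev over a base bdev plus an NV-cache partition.
    # --l2p_dram_limit 60 caps the resident L2P cache at 60 MiB, which is
    # why ftl_l2p_cache_init reports "59 (of 60) MiB" in the trace above.
    ./scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d <base-bdev-uuid> -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

    # Unload when finished; this persists FTL metadata and, as the
    # shutdown trace later in the log shows, dumps per-band statistics.
    ./scripts/rpc.py bdev_ftl_unload -b ftl0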
00:57:40.075 [2024-12-09 10:08:46.885882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:40.075 [2024-12-09 10:08:46.885933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:57:40.075 [2024-12-09 10:08:46.885957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.460 ms 00:57:40.075 [2024-12-09 10:08:46.885971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:40.075 [2024-12-09 10:08:46.916857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:40.075 [2024-12-09 10:08:46.916903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:57:40.075 [2024-12-09 10:08:46.916925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.770 ms 00:57:40.075 [2024-12-09 10:08:46.916938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:40.075 [2024-12-09 10:08:46.948091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:40.075 [2024-12-09 10:08:46.948153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:57:40.075 [2024-12-09 10:08:46.948182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.045 ms 00:57:40.075 [2024-12-09 10:08:46.948196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:40.075 [2024-12-09 10:08:46.948340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:40.075 [2024-12-09 10:08:46.948366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:57:40.075 [2024-12-09 10:08:46.948387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:57:40.075 [2024-12-09 10:08:46.948401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:40.075 [2024-12-09 10:08:46.948504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:40.075 [2024-12-09 10:08:46.948523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:57:40.075 [2024-12-09 10:08:46.948539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:57:40.075 [2024-12-09 10:08:46.948551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:40.075 [2024-12-09 10:08:46.949808] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:57:40.075 [2024-12-09 10:08:46.953899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3097.689 ms, result 0 00:57:40.075 [2024-12-09 10:08:46.954802] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:57:40.075 { 00:57:40.075 "name": "ftl0", 00:57:40.075 "uuid": "68115f76-6e25-4e17-a078-1a730c2e63d7" 00:57:40.075 } 00:57:40.075 10:08:46 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:57:40.075 10:08:46 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:57:40.075 10:08:46 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:57:40.075 10:08:46 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:57:40.075 10:08:46 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:57:40.075 10:08:46 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:57:40.075 10:08:46 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:57:40.334 10:08:47 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:57:40.592 [ 00:57:40.592 { 00:57:40.592 "name": "ftl0", 00:57:40.592 "aliases": [ 00:57:40.592 "68115f76-6e25-4e17-a078-1a730c2e63d7" 00:57:40.592 ], 00:57:40.592 "product_name": "FTL disk", 00:57:40.592 "block_size": 4096, 00:57:40.592 "num_blocks": 23592960, 00:57:40.592 "uuid": "68115f76-6e25-4e17-a078-1a730c2e63d7", 00:57:40.592 "assigned_rate_limits": { 00:57:40.592 "rw_ios_per_sec": 0, 00:57:40.592 "rw_mbytes_per_sec": 0, 00:57:40.592 "r_mbytes_per_sec": 0, 00:57:40.592 "w_mbytes_per_sec": 0 00:57:40.592 }, 00:57:40.592 "claimed": false, 00:57:40.592 "zoned": false, 00:57:40.592 "supported_io_types": { 00:57:40.592 "read": true, 00:57:40.592 "write": true, 00:57:40.592 "unmap": true, 00:57:40.592 "flush": true, 00:57:40.592 "reset": false, 00:57:40.592 "nvme_admin": false, 00:57:40.592 "nvme_io": false, 00:57:40.592 "nvme_io_md": false, 00:57:40.592 "write_zeroes": true, 00:57:40.592 "zcopy": false, 00:57:40.592 "get_zone_info": false, 00:57:40.592 "zone_management": false, 00:57:40.592 "zone_append": false, 00:57:40.592 "compare": false, 00:57:40.592 "compare_and_write": false, 00:57:40.592 "abort": false, 00:57:40.592 "seek_hole": false, 00:57:40.592 "seek_data": false, 00:57:40.592 "copy": false, 00:57:40.592 "nvme_iov_md": false 00:57:40.592 }, 00:57:40.593 "driver_specific": { 00:57:40.593 "ftl": { 00:57:40.593 "base_bdev": "aab70419-22b2-488b-b44c-699744a2b809", 00:57:40.593 "cache": "nvc0n1p0" 00:57:40.593 } 00:57:40.593 } 00:57:40.593 } 00:57:40.593 ] 00:57:40.593 10:08:47 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:57:40.593 10:08:47 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:57:40.593 10:08:47 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:57:41.160 10:08:47 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:57:41.160 10:08:47 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:57:41.419 10:08:48 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:57:41.419 { 00:57:41.419 "name": "ftl0", 00:57:41.419 "aliases": [ 00:57:41.419 "68115f76-6e25-4e17-a078-1a730c2e63d7" 00:57:41.419 ], 00:57:41.419 "product_name": "FTL disk", 00:57:41.419 "block_size": 4096, 00:57:41.419 "num_blocks": 23592960, 00:57:41.419 "uuid": "68115f76-6e25-4e17-a078-1a730c2e63d7", 00:57:41.419 "assigned_rate_limits": { 00:57:41.419 "rw_ios_per_sec": 0, 00:57:41.419 "rw_mbytes_per_sec": 0, 00:57:41.419 "r_mbytes_per_sec": 0, 00:57:41.419 "w_mbytes_per_sec": 0 00:57:41.419 }, 00:57:41.419 "claimed": false, 00:57:41.419 "zoned": false, 00:57:41.419 "supported_io_types": { 00:57:41.419 "read": true, 00:57:41.419 "write": true, 00:57:41.419 "unmap": true, 00:57:41.419 "flush": true, 00:57:41.419 "reset": false, 00:57:41.419 "nvme_admin": false, 00:57:41.419 "nvme_io": false, 00:57:41.419 "nvme_io_md": false, 00:57:41.419 "write_zeroes": true, 00:57:41.419 "zcopy": false, 00:57:41.419 "get_zone_info": false, 00:57:41.419 "zone_management": false, 00:57:41.419 "zone_append": false, 00:57:41.419 "compare": false, 00:57:41.419 "compare_and_write": false, 00:57:41.419 "abort": false, 00:57:41.419 "seek_hole": false, 00:57:41.419 "seek_data": false, 00:57:41.419 "copy": false, 00:57:41.419 "nvme_iov_md": false 00:57:41.419 }, 00:57:41.419 "driver_specific": { 00:57:41.419 "ftl": { 00:57:41.419 "base_bdev": "aab70419-22b2-488b-b44c-699744a2b809", 
00:57:41.419 "cache": "nvc0n1p0" 00:57:41.419 } 00:57:41.419 } 00:57:41.419 } 00:57:41.419 ]' 00:57:41.419 10:08:48 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:57:41.419 10:08:48 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:57:41.419 10:08:48 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:57:41.678 [2024-12-09 10:08:48.591203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.678 [2024-12-09 10:08:48.591286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:57:41.678 [2024-12-09 10:08:48.591315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:57:41.678 [2024-12-09 10:08:48.591337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.678 [2024-12-09 10:08:48.591385] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:57:41.678 [2024-12-09 10:08:48.595087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.678 [2024-12-09 10:08:48.595123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:57:41.678 [2024-12-09 10:08:48.595146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:57:41.678 [2024-12-09 10:08:48.595159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.678 [2024-12-09 10:08:48.595744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.678 [2024-12-09 10:08:48.595780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:57:41.678 [2024-12-09 10:08:48.595800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:57:41.678 [2024-12-09 10:08:48.595813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.678 [2024-12-09 10:08:48.599455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.678 [2024-12-09 10:08:48.599491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:57:41.678 [2024-12-09 10:08:48.599511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.601 ms 00:57:41.678 [2024-12-09 10:08:48.599524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.678 [2024-12-09 10:08:48.607005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.678 [2024-12-09 10:08:48.607043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:57:41.678 [2024-12-09 10:08:48.607062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.403 ms 00:57:41.678 [2024-12-09 10:08:48.607075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.678 [2024-12-09 10:08:48.639992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.678 [2024-12-09 10:08:48.640193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:57:41.678 [2024-12-09 10:08:48.640233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.823 ms 00:57:41.678 [2024-12-09 10:08:48.640269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.678 [2024-12-09 10:08:48.659285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.678 [2024-12-09 10:08:48.659337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:57:41.678 [2024-12-09 10:08:48.659362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 18.872 ms 00:57:41.678 [2024-12-09 10:08:48.659380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.678 [2024-12-09 10:08:48.659641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.678 [2024-12-09 10:08:48.659664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:57:41.678 [2024-12-09 10:08:48.659683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:57:41.678 [2024-12-09 10:08:48.659696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.678 [2024-12-09 10:08:48.691391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.678 [2024-12-09 10:08:48.691463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:57:41.678 [2024-12-09 10:08:48.691501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.648 ms 00:57:41.678 [2024-12-09 10:08:48.691514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.939 [2024-12-09 10:08:48.722789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.939 [2024-12-09 10:08:48.722838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:57:41.939 [2024-12-09 10:08:48.722864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.139 ms 00:57:41.939 [2024-12-09 10:08:48.722878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.939 [2024-12-09 10:08:48.753286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.939 [2024-12-09 10:08:48.753334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:57:41.939 [2024-12-09 10:08:48.753371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.292 ms 00:57:41.939 [2024-12-09 10:08:48.753384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.939 [2024-12-09 10:08:48.783463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.939 [2024-12-09 10:08:48.783526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:57:41.939 [2024-12-09 10:08:48.783567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.909 ms 00:57:41.939 [2024-12-09 10:08:48.783579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.939 [2024-12-09 10:08:48.783702] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:57:41.939 [2024-12-09 10:08:48.783730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783842] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.783997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 
[2024-12-09 10:08:48.784229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:57:41.939 [2024-12-09 10:08:48.784482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:57:41.940 [2024-12-09 10:08:48.784727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.784989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:57:41.940 [2024-12-09 10:08:48.785370] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:57:41.940 [2024-12-09 10:08:48.785388] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68115f76-6e25-4e17-a078-1a730c2e63d7 00:57:41.940 [2024-12-09 10:08:48.785401] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:57:41.940 [2024-12-09 10:08:48.785415] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:57:41.940 [2024-12-09 10:08:48.785427] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:57:41.940 [2024-12-09 10:08:48.785446] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:57:41.940 [2024-12-09 10:08:48.785458] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:57:41.940 [2024-12-09 10:08:48.785472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:57:41.940 [2024-12-09 10:08:48.785484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:57:41.940 [2024-12-09 10:08:48.785497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:57:41.940 [2024-12-09 10:08:48.785507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:57:41.940 [2024-12-09 10:08:48.785522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.940 [2024-12-09 10:08:48.785533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:57:41.940 [2024-12-09 10:08:48.785549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.824 ms 00:57:41.940 [2024-12-09 10:08:48.785561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.940 [2024-12-09 10:08:48.802577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.940 [2024-12-09 10:08:48.802624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:57:41.940 [2024-12-09 10:08:48.802664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.972 ms 00:57:41.940 [2024-12-09 10:08:48.802677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.940 [2024-12-09 10:08:48.803231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:41.940 [2024-12-09 10:08:48.803276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:57:41.940 [2024-12-09 10:08:48.803314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:57:41.940 [2024-12-09 10:08:48.803327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.940 [2024-12-09 10:08:48.862677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:41.940 [2024-12-09 10:08:48.862752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:57:41.940 [2024-12-09 10:08:48.862793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:41.940 [2024-12-09 10:08:48.862806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.940 [2024-12-09 10:08:48.863014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:41.940 [2024-12-09 10:08:48.863034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:57:41.940 [2024-12-09 10:08:48.863051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:41.940 [2024-12-09 10:08:48.863065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.940 [2024-12-09 10:08:48.863161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:41.940 [2024-12-09 10:08:48.863181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:57:41.940 [2024-12-09 10:08:48.863206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:41.940 [2024-12-09 10:08:48.863219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.940 [2024-12-09 10:08:48.863285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:41.940 [2024-12-09 10:08:48.863303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:57:41.940 [2024-12-09 10:08:48.863319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:41.940 [2024-12-09 10:08:48.863333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:41.940 [2024-12-09 10:08:48.981120] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:41.940 [2024-12-09 10:08:48.981196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:57:41.940 [2024-12-09 10:08:48.981237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:41.940 [2024-12-09 10:08:48.981251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:42.231 [2024-12-09 10:08:49.068531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:42.231 [2024-12-09 10:08:49.068618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:57:42.231 [2024-12-09 10:08:49.068657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:42.231 [2024-12-09 10:08:49.068671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:42.231 [2024-12-09 10:08:49.068838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:42.231 [2024-12-09 10:08:49.068858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:57:42.231 [2024-12-09 10:08:49.068878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:42.231 [2024-12-09 10:08:49.068894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:42.231 [2024-12-09 10:08:49.068959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:42.231 [2024-12-09 10:08:49.068974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:57:42.231 [2024-12-09 10:08:49.068989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:42.231 [2024-12-09 10:08:49.069000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:42.231 [2024-12-09 10:08:49.069174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:42.231 [2024-12-09 10:08:49.069194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:57:42.231 [2024-12-09 10:08:49.069210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:42.231 [2024-12-09 10:08:49.069226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:42.231 [2024-12-09 10:08:49.069369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:42.231 [2024-12-09 10:08:49.069391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:57:42.231 [2024-12-09 10:08:49.069408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:42.231 [2024-12-09 10:08:49.069421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:42.231 [2024-12-09 10:08:49.069491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:42.231 [2024-12-09 10:08:49.069508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:57:42.231 [2024-12-09 10:08:49.069527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:42.231 [2024-12-09 10:08:49.069540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:42.231 [2024-12-09 10:08:49.069660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:42.231 [2024-12-09 10:08:49.069679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:57:42.231 [2024-12-09 10:08:49.069713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:42.231 [2024-12-09 10:08:49.069726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:57:42.231 [2024-12-09 10:08:49.069999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 478.766 ms, result 0 00:57:42.231 true 00:57:42.231 10:08:49 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78569 00:57:42.231 10:08:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78569 ']' 00:57:42.231 10:08:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78569 00:57:42.231 10:08:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:57:42.231 10:08:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:42.231 10:08:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78569 00:57:42.231 killing process with pid 78569 00:57:42.231 10:08:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:42.231 10:08:49 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:42.231 10:08:49 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78569' 00:57:42.231 10:08:49 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78569 00:57:42.231 10:08:49 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78569 00:57:47.521 10:08:53 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:57:48.088 65536+0 records in 00:57:48.088 65536+0 records out 00:57:48.088 268435456 bytes (268 MB, 256 MiB) copied, 1.17602 s, 228 MB/s 00:57:48.088 10:08:55 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:57:48.346 [2024-12-09 10:08:55.237502] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
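The write phase the log enters here reduces to two commands, copied from the trim.sh xtrace above; the output redirection on the dd step is an assumption, since bash xtrace does not print redirections:

    # 65536 x 4 KiB blocks = 256 MiB of random data for the write pattern
    dd if=/dev/urandom bs=4K count=65536 \
        > /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern

    # Replay the pattern onto the FTL bdev from inside an SPDK app context
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json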
00:57:48.346 [2024-12-09 10:08:55.237713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78775 ] 00:57:48.603 [2024-12-09 10:08:55.418892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:48.603 [2024-12-09 10:08:55.580798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:49.170 [2024-12-09 10:08:55.975540] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:57:49.170 [2024-12-09 10:08:55.975675] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:57:49.170 [2024-12-09 10:08:56.145849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.170 [2024-12-09 10:08:56.145918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:57:49.170 [2024-12-09 10:08:56.145944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:57:49.170 [2024-12-09 10:08:56.145956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.170 [2024-12-09 10:08:56.149630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.170 [2024-12-09 10:08:56.149676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:57:49.170 [2024-12-09 10:08:56.149694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.641 ms 00:57:49.170 [2024-12-09 10:08:56.149706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.170 [2024-12-09 10:08:56.149856] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:57:49.170 [2024-12-09 10:08:56.150822] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:57:49.170 [2024-12-09 10:08:56.151011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.170 [2024-12-09 10:08:56.151033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:57:49.170 [2024-12-09 10:08:56.151048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:57:49.170 [2024-12-09 10:08:56.151060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.170 [2024-12-09 10:08:56.153354] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:57:49.170 [2024-12-09 10:08:56.170869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.170 [2024-12-09 10:08:56.170916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:57:49.170 [2024-12-09 10:08:56.170935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.516 ms 00:57:49.170 [2024-12-09 10:08:56.170948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.170 [2024-12-09 10:08:56.171070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.170 [2024-12-09 10:08:56.171093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:57:49.171 [2024-12-09 10:08:56.171107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:57:49.171 [2024-12-09 10:08:56.171119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.171 [2024-12-09 10:08:56.179893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:57:49.171 [2024-12-09 10:08:56.180197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:57:49.171 [2024-12-09 10:08:56.180227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.714 ms 00:57:49.171 [2024-12-09 10:08:56.180241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.171 [2024-12-09 10:08:56.180407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.171 [2024-12-09 10:08:56.180430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:57:49.171 [2024-12-09 10:08:56.180443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:57:49.171 [2024-12-09 10:08:56.180455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.171 [2024-12-09 10:08:56.180501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.171 [2024-12-09 10:08:56.180518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:57:49.171 [2024-12-09 10:08:56.180531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:57:49.171 [2024-12-09 10:08:56.180542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.171 [2024-12-09 10:08:56.180575] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:57:49.171 [2024-12-09 10:08:56.185588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.171 [2024-12-09 10:08:56.185629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:57:49.171 [2024-12-09 10:08:56.185646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.023 ms 00:57:49.171 [2024-12-09 10:08:56.185658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.171 [2024-12-09 10:08:56.185754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.171 [2024-12-09 10:08:56.185774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:57:49.171 [2024-12-09 10:08:56.185788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:57:49.171 [2024-12-09 10:08:56.185803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.171 [2024-12-09 10:08:56.185868] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:57:49.171 [2024-12-09 10:08:56.185924] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:57:49.171 [2024-12-09 10:08:56.185981] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:57:49.171 [2024-12-09 10:08:56.186009] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:57:49.171 [2024-12-09 10:08:56.186129] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:57:49.171 [2024-12-09 10:08:56.186152] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:57:49.171 [2024-12-09 10:08:56.186168] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:57:49.171 [2024-12-09 10:08:56.186189] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:57:49.171 [2024-12-09 10:08:56.186204] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:57:49.171 [2024-12-09 10:08:56.186216] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:57:49.171 [2024-12-09 10:08:56.186227] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:57:49.171 [2024-12-09 10:08:56.186238] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:57:49.171 [2024-12-09 10:08:56.186275] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:57:49.171 [2024-12-09 10:08:56.186293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.171 [2024-12-09 10:08:56.186305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:57:49.171 [2024-12-09 10:08:56.186318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:57:49.171 [2024-12-09 10:08:56.186342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.171 [2024-12-09 10:08:56.186449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.171 [2024-12-09 10:08:56.186471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:57:49.171 [2024-12-09 10:08:56.186484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:57:49.171 [2024-12-09 10:08:56.186495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.171 [2024-12-09 10:08:56.186614] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:57:49.171 [2024-12-09 10:08:56.186632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:57:49.171 [2024-12-09 10:08:56.186645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:57:49.171 [2024-12-09 10:08:56.186657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:49.171 [2024-12-09 10:08:56.186669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:57:49.171 [2024-12-09 10:08:56.186679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:57:49.171 [2024-12-09 10:08:56.186690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:57:49.171 [2024-12-09 10:08:56.186700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:57:49.171 [2024-12-09 10:08:56.186710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:57:49.171 [2024-12-09 10:08:56.186721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:57:49.171 [2024-12-09 10:08:56.186731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:57:49.171 [2024-12-09 10:08:56.186754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:57:49.171 [2024-12-09 10:08:56.186765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:57:49.171 [2024-12-09 10:08:56.186775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:57:49.171 [2024-12-09 10:08:56.186786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:57:49.171 [2024-12-09 10:08:56.186797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:49.171 [2024-12-09 10:08:56.186807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:57:49.171 [2024-12-09 10:08:56.186817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:57:49.171 [2024-12-09 10:08:56.186826] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:49.171 [2024-12-09 10:08:56.186836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:57:49.171 [2024-12-09 10:08:56.186846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:57:49.171 [2024-12-09 10:08:56.186857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:49.171 [2024-12-09 10:08:56.186867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:57:49.171 [2024-12-09 10:08:56.186877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:57:49.171 [2024-12-09 10:08:56.186886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:49.171 [2024-12-09 10:08:56.186896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:57:49.171 [2024-12-09 10:08:56.186906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:57:49.171 [2024-12-09 10:08:56.186916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:49.171 [2024-12-09 10:08:56.186926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:57:49.171 [2024-12-09 10:08:56.186936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:57:49.171 [2024-12-09 10:08:56.186945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:49.171 [2024-12-09 10:08:56.186956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:57:49.171 [2024-12-09 10:08:56.186965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:57:49.171 [2024-12-09 10:08:56.186975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:57:49.171 [2024-12-09 10:08:56.186985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:57:49.171 [2024-12-09 10:08:56.186995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:57:49.171 [2024-12-09 10:08:56.187005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:57:49.171 [2024-12-09 10:08:56.187016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:57:49.171 [2024-12-09 10:08:56.187026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:57:49.171 [2024-12-09 10:08:56.187036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:49.171 [2024-12-09 10:08:56.187046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:57:49.171 [2024-12-09 10:08:56.187057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:57:49.171 [2024-12-09 10:08:56.187068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:49.171 [2024-12-09 10:08:56.187079] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:57:49.171 [2024-12-09 10:08:56.187090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:57:49.171 [2024-12-09 10:08:56.187107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:57:49.171 [2024-12-09 10:08:56.187119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:49.171 [2024-12-09 10:08:56.187131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:57:49.171 [2024-12-09 10:08:56.187141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:57:49.171 [2024-12-09 10:08:56.187151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:57:49.171 
[2024-12-09 10:08:56.187162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:57:49.171 [2024-12-09 10:08:56.187172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:57:49.171 [2024-12-09 10:08:56.187183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:57:49.171 [2024-12-09 10:08:56.187196] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:57:49.171 [2024-12-09 10:08:56.187210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:57:49.171 [2024-12-09 10:08:56.187223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:57:49.171 [2024-12-09 10:08:56.187235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:57:49.171 [2024-12-09 10:08:56.187246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:57:49.171 [2024-12-09 10:08:56.187277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:57:49.172 [2024-12-09 10:08:56.187289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:57:49.172 [2024-12-09 10:08:56.187301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:57:49.172 [2024-12-09 10:08:56.187313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:57:49.172 [2024-12-09 10:08:56.187325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:57:49.172 [2024-12-09 10:08:56.187336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:57:49.172 [2024-12-09 10:08:56.187347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:57:49.172 [2024-12-09 10:08:56.187359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:57:49.172 [2024-12-09 10:08:56.187370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:57:49.172 [2024-12-09 10:08:56.187382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:57:49.172 [2024-12-09 10:08:56.187394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:57:49.172 [2024-12-09 10:08:56.187407] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:57:49.172 [2024-12-09 10:08:56.187420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:57:49.172 [2024-12-09 10:08:56.187433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:57:49.172 [2024-12-09 10:08:56.187446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:57:49.172 [2024-12-09 10:08:56.187458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:57:49.172 [2024-12-09 10:08:56.187470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:57:49.172 [2024-12-09 10:08:56.187484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.172 [2024-12-09 10:08:56.187501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:57:49.172 [2024-12-09 10:08:56.187513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:57:49.172 [2024-12-09 10:08:56.187524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.430 [2024-12-09 10:08:56.229574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.430 [2024-12-09 10:08:56.229657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:57:49.430 [2024-12-09 10:08:56.229678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.965 ms 00:57:49.430 [2024-12-09 10:08:56.229691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.430 [2024-12-09 10:08:56.229926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.430 [2024-12-09 10:08:56.229954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:57:49.430 [2024-12-09 10:08:56.229970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:57:49.430 [2024-12-09 10:08:56.229982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.430 [2024-12-09 10:08:56.291670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.430 [2024-12-09 10:08:56.291734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:57:49.430 [2024-12-09 10:08:56.291792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.649 ms 00:57:49.430 [2024-12-09 10:08:56.291805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.430 [2024-12-09 10:08:56.291982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.430 [2024-12-09 10:08:56.292003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:57:49.430 [2024-12-09 10:08:56.292019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:57:49.430 [2024-12-09 10:08:56.292031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.430 [2024-12-09 10:08:56.292652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.430 [2024-12-09 10:08:56.292687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:57:49.430 [2024-12-09 10:08:56.292719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:57:49.430 [2024-12-09 10:08:56.292731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.430 [2024-12-09 10:08:56.292908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.430 [2024-12-09 10:08:56.292933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:57:49.430 [2024-12-09 10:08:56.292948] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:57:49.430 [2024-12-09 10:08:56.292958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.430 [2024-12-09 10:08:56.316633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.430 [2024-12-09 10:08:56.316681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:57:49.430 [2024-12-09 10:08:56.316699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.642 ms 00:57:49.430 [2024-12-09 10:08:56.316711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.430 [2024-12-09 10:08:56.335363] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:57:49.430 [2024-12-09 10:08:56.335413] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:57:49.430 [2024-12-09 10:08:56.335432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.430 [2024-12-09 10:08:56.335445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:57:49.430 [2024-12-09 10:08:56.335458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.552 ms 00:57:49.430 [2024-12-09 10:08:56.335470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.430 [2024-12-09 10:08:56.368153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.430 [2024-12-09 10:08:56.368214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:57:49.430 [2024-12-09 10:08:56.368248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.582 ms 00:57:49.430 [2024-12-09 10:08:56.368261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.430 [2024-12-09 10:08:56.385693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.430 [2024-12-09 10:08:56.385748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:57:49.430 [2024-12-09 10:08:56.385786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.322 ms 00:57:49.430 [2024-12-09 10:08:56.385797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.430 [2024-12-09 10:08:56.402101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.430 [2024-12-09 10:08:56.402148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:57:49.430 [2024-12-09 10:08:56.402165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.190 ms 00:57:49.430 [2024-12-09 10:08:56.402177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.430 [2024-12-09 10:08:56.403077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.430 [2024-12-09 10:08:56.403119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:57:49.430 [2024-12-09 10:08:56.403136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:57:49.430 [2024-12-09 10:08:56.403148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.689 [2024-12-09 10:08:56.487567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.689 [2024-12-09 10:08:56.487640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:57:49.689 [2024-12-09 10:08:56.487662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.379 ms 00:57:49.689 [2024-12-09 10:08:56.487675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.689 [2024-12-09 10:08:56.500849] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:57:49.689 [2024-12-09 10:08:56.523082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.689 [2024-12-09 10:08:56.523322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:57:49.689 [2024-12-09 10:08:56.523359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.255 ms 00:57:49.689 [2024-12-09 10:08:56.523375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.689 [2024-12-09 10:08:56.523594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.689 [2024-12-09 10:08:56.523619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:57:49.689 [2024-12-09 10:08:56.523634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:57:49.689 [2024-12-09 10:08:56.523646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.689 [2024-12-09 10:08:56.523742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.689 [2024-12-09 10:08:56.523766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:57:49.689 [2024-12-09 10:08:56.523785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:57:49.689 [2024-12-09 10:08:56.523797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.689 [2024-12-09 10:08:56.523864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.689 [2024-12-09 10:08:56.523892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:57:49.689 [2024-12-09 10:08:56.523905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:57:49.689 [2024-12-09 10:08:56.523917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.689 [2024-12-09 10:08:56.523976] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:57:49.689 [2024-12-09 10:08:56.523996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.689 [2024-12-09 10:08:56.524008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:57:49.689 [2024-12-09 10:08:56.524020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:57:49.689 [2024-12-09 10:08:56.524031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.689 [2024-12-09 10:08:56.557482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.689 [2024-12-09 10:08:56.557531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:57:49.689 [2024-12-09 10:08:56.557550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.421 ms 00:57:49.689 [2024-12-09 10:08:56.557563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:49.689 [2024-12-09 10:08:56.557705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:49.689 [2024-12-09 10:08:56.557726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:57:49.689 [2024-12-09 10:08:56.557740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:57:49.689 [2024-12-09 10:08:56.557752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:57:49.689 [2024-12-09 10:08:56.559115] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:57:49.689 [2024-12-09 10:08:56.563586] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 412.958 ms, result 0 00:57:49.689 [2024-12-09 10:08:56.564446] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:57:49.689 [2024-12-09 10:08:56.581074] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:57:50.624  [2024-12-09T10:08:58.600Z] Copying: 25/256 [MB] (25 MBps) [2024-12-09T10:08:59.975Z] Copying: 49/256 [MB] (24 MBps) [2024-12-09T10:09:00.908Z] Copying: 74/256 [MB] (24 MBps) [2024-12-09T10:09:01.861Z] Copying: 98/256 [MB] (23 MBps) [2024-12-09T10:09:02.794Z] Copying: 121/256 [MB] (23 MBps) [2024-12-09T10:09:03.727Z] Copying: 146/256 [MB] (24 MBps) [2024-12-09T10:09:04.660Z] Copying: 171/256 [MB] (24 MBps) [2024-12-09T10:09:05.610Z] Copying: 195/256 [MB] (24 MBps) [2024-12-09T10:09:06.984Z] Copying: 218/256 [MB] (23 MBps) [2024-12-09T10:09:07.244Z] Copying: 242/256 [MB] (23 MBps) [2024-12-09T10:09:07.244Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-09 10:09:07.192147] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:58:00.200 [2024-12-09 10:09:07.205158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.200 [2024-12-09 10:09:07.205235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:58:00.200 [2024-12-09 10:09:07.205276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:58:00.200 [2024-12-09 10:09:07.205302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.200 [2024-12-09 10:09:07.205342] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:58:00.200 [2024-12-09 10:09:07.209025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.200 [2024-12-09 10:09:07.209071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:58:00.200 [2024-12-09 10:09:07.209089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.659 ms 00:58:00.200 [2024-12-09 10:09:07.209101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.200 [2024-12-09 10:09:07.211078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.200 [2024-12-09 10:09:07.211280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:58:00.200 [2024-12-09 10:09:07.211311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:58:00.200 [2024-12-09 10:09:07.211324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.200 [2024-12-09 10:09:07.218950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.200 [2024-12-09 10:09:07.219190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:58:00.200 [2024-12-09 10:09:07.219218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.576 ms 00:58:00.200 [2024-12-09 10:09:07.219232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.200 [2024-12-09 10:09:07.226670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.200 [2024-12-09 10:09:07.226853] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:58:00.200 [2024-12-09 10:09:07.227009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.338 ms 00:58:00.200 [2024-12-09 10:09:07.227058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.461 [2024-12-09 10:09:07.255265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.461 [2024-12-09 10:09:07.255502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:58:00.461 [2024-12-09 10:09:07.255639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.975 ms 00:58:00.461 [2024-12-09 10:09:07.255771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.461 [2024-12-09 10:09:07.272026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.461 [2024-12-09 10:09:07.272230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:58:00.461 [2024-12-09 10:09:07.272386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.151 ms 00:58:00.461 [2024-12-09 10:09:07.272514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.461 [2024-12-09 10:09:07.272721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.461 [2024-12-09 10:09:07.272790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:58:00.461 [2024-12-09 10:09:07.272909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:58:00.462 [2024-12-09 10:09:07.273040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.462 [2024-12-09 10:09:07.301683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.462 [2024-12-09 10:09:07.301959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:58:00.462 [2024-12-09 10:09:07.302075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.577 ms 00:58:00.462 [2024-12-09 10:09:07.302124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.462 [2024-12-09 10:09:07.333839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.462 [2024-12-09 10:09:07.334142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:58:00.462 [2024-12-09 10:09:07.334283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.504 ms 00:58:00.462 [2024-12-09 10:09:07.334428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.462 [2024-12-09 10:09:07.363318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.462 [2024-12-09 10:09:07.363518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:58:00.462 [2024-12-09 10:09:07.363643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.730 ms 00:58:00.462 [2024-12-09 10:09:07.363692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.462 [2024-12-09 10:09:07.393800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.462 [2024-12-09 10:09:07.394014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:58:00.462 [2024-12-09 10:09:07.394143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.909 ms 00:58:00.462 [2024-12-09 10:09:07.394311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.462 [2024-12-09 10:09:07.394394] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:58:00.462 [2024-12-09 10:09:07.394422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.394998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.395982] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:58:00.462 [2024-12-09 10:09:07.396042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396899] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:58:00.463 [2024-12-09 10:09:07.396936] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:58:00.463 [2024-12-09 10:09:07.396949] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68115f76-6e25-4e17-a078-1a730c2e63d7 00:58:00.463 [2024-12-09 10:09:07.396961] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:58:00.463 [2024-12-09 10:09:07.396972] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:58:00.463 [2024-12-09 10:09:07.396983] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:58:00.463 [2024-12-09 10:09:07.396995] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:58:00.463 [2024-12-09 10:09:07.397005] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:58:00.463 [2024-12-09 10:09:07.397017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:58:00.463 [2024-12-09 10:09:07.397027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:58:00.463 [2024-12-09 10:09:07.397037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:58:00.463 [2024-12-09 10:09:07.397047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:58:00.463 [2024-12-09 10:09:07.397076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.463 [2024-12-09 10:09:07.397093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:58:00.463 [2024-12-09 10:09:07.397106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.683 ms 00:58:00.463 [2024-12-09 10:09:07.397119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.463 [2024-12-09 10:09:07.413274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.463 [2024-12-09 10:09:07.413351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:58:00.463 [2024-12-09 10:09:07.413384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.122 ms 00:58:00.463 [2024-12-09 10:09:07.413396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.463 [2024-12-09 10:09:07.413929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:00.463 [2024-12-09 10:09:07.413961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:58:00.463 [2024-12-09 10:09:07.413976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:58:00.463 [2024-12-09 10:09:07.413988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.463 [2024-12-09 10:09:07.459105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:00.463 [2024-12-09 10:09:07.459158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:00.463 [2024-12-09 10:09:07.459190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:00.463 [2024-12-09 10:09:07.459201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.463 [2024-12-09 10:09:07.459401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:00.463 [2024-12-09 10:09:07.459422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:00.463 [2024-12-09 10:09:07.459436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:00.463 [2024-12-09 10:09:07.459447] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.463 [2024-12-09 10:09:07.459516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:00.463 [2024-12-09 10:09:07.459534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:00.463 [2024-12-09 10:09:07.459546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:00.463 [2024-12-09 10:09:07.459557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.463 [2024-12-09 10:09:07.459582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:00.463 [2024-12-09 10:09:07.459609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:00.463 [2024-12-09 10:09:07.459653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:00.463 [2024-12-09 10:09:07.459665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.722 [2024-12-09 10:09:07.557076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:00.722 [2024-12-09 10:09:07.557152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:00.722 [2024-12-09 10:09:07.557187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:00.722 [2024-12-09 10:09:07.557199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.722 [2024-12-09 10:09:07.635633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:00.722 [2024-12-09 10:09:07.635705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:00.722 [2024-12-09 10:09:07.635739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:00.722 [2024-12-09 10:09:07.635752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.722 [2024-12-09 10:09:07.635870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:00.722 [2024-12-09 10:09:07.635888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:00.722 [2024-12-09 10:09:07.635900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:00.722 [2024-12-09 10:09:07.635927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.722 [2024-12-09 10:09:07.635962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:00.722 [2024-12-09 10:09:07.635975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:00.722 [2024-12-09 10:09:07.636011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:00.722 [2024-12-09 10:09:07.636021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.722 [2024-12-09 10:09:07.636144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:00.722 [2024-12-09 10:09:07.636163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:00.722 [2024-12-09 10:09:07.636176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:00.722 [2024-12-09 10:09:07.636186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.722 [2024-12-09 10:09:07.636236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:00.722 [2024-12-09 10:09:07.636252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:58:00.722 [2024-12-09 10:09:07.636264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:58:00.722 [2024-12-09 10:09:07.636282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.722 [2024-12-09 10:09:07.636406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:00.722 [2024-12-09 10:09:07.636423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:00.722 [2024-12-09 10:09:07.636451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:00.722 [2024-12-09 10:09:07.636462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.722 [2024-12-09 10:09:07.636517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:00.722 [2024-12-09 10:09:07.636533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:00.722 [2024-12-09 10:09:07.636551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:00.722 [2024-12-09 10:09:07.636561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:00.722 [2024-12-09 10:09:07.636779] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 431.592 ms, result 0 00:58:01.656 00:58:01.656 00:58:01.915 10:09:08 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78911 00:58:01.916 10:09:08 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:58:01.916 10:09:08 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78911 00:58:01.916 10:09:08 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78911 ']' 00:58:01.916 10:09:08 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:01.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:01.916 10:09:08 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:01.916 10:09:08 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:01.916 10:09:08 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:01.916 10:09:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:58:01.916 [2024-12-09 10:09:08.841124] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:58:01.916 [2024-12-09 10:09:08.841632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78911 ] 00:58:02.175 [2024-12-09 10:09:09.031793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:02.175 [2024-12-09 10:09:09.173054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:03.112 10:09:10 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:03.112 10:09:10 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:58:03.112 10:09:10 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:58:03.371 [2024-12-09 10:09:10.358556] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:58:03.371 [2024-12-09 10:09:10.358649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:58:03.631 [2024-12-09 10:09:10.547932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.631 [2024-12-09 10:09:10.548184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:58:03.631 [2024-12-09 10:09:10.548229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:58:03.631 [2024-12-09 10:09:10.548245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.631 [2024-12-09 10:09:10.552764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.631 [2024-12-09 10:09:10.552811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:03.631 [2024-12-09 10:09:10.552831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.469 ms 00:58:03.631 [2024-12-09 10:09:10.552843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.631 [2024-12-09 10:09:10.553000] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:58:03.631 [2024-12-09 10:09:10.553959] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:58:03.631 [2024-12-09 10:09:10.554006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.631 [2024-12-09 10:09:10.554021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:03.631 [2024-12-09 10:09:10.554036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:58:03.631 [2024-12-09 10:09:10.554048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.631 [2024-12-09 10:09:10.556228] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:58:03.631 [2024-12-09 10:09:10.573023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.631 [2024-12-09 10:09:10.573273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:58:03.632 [2024-12-09 10:09:10.573304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.801 ms 00:58:03.632 [2024-12-09 10:09:10.573325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.632 [2024-12-09 10:09:10.573446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.632 [2024-12-09 10:09:10.573470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:58:03.632 [2024-12-09 10:09:10.573484] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:58:03.632 [2024-12-09 10:09:10.573498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.632 [2024-12-09 10:09:10.582770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.632 [2024-12-09 10:09:10.583049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:03.632 [2024-12-09 10:09:10.583078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.192 ms 00:58:03.632 [2024-12-09 10:09:10.583094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.632 [2024-12-09 10:09:10.583310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.632 [2024-12-09 10:09:10.583341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:03.632 [2024-12-09 10:09:10.583356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:58:03.632 [2024-12-09 10:09:10.583397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.632 [2024-12-09 10:09:10.583460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.632 [2024-12-09 10:09:10.583487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:58:03.632 [2024-12-09 10:09:10.583501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:58:03.632 [2024-12-09 10:09:10.583516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.632 [2024-12-09 10:09:10.583568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:58:03.632 [2024-12-09 10:09:10.588769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.632 [2024-12-09 10:09:10.588804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:03.632 [2024-12-09 10:09:10.588838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.223 ms 00:58:03.632 [2024-12-09 10:09:10.588849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.632 [2024-12-09 10:09:10.588917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.632 [2024-12-09 10:09:10.588934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:58:03.632 [2024-12-09 10:09:10.588948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:58:03.632 [2024-12-09 10:09:10.588961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.632 [2024-12-09 10:09:10.589008] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:58:03.632 [2024-12-09 10:09:10.589036] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:58:03.632 [2024-12-09 10:09:10.589096] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:58:03.632 [2024-12-09 10:09:10.589120] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:58:03.632 [2024-12-09 10:09:10.589229] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:58:03.632 [2024-12-09 10:09:10.589245] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:58:03.632 [2024-12-09 10:09:10.589314] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:58:03.632 [2024-12-09 10:09:10.589331] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:58:03.632 [2024-12-09 10:09:10.589365] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:58:03.632 [2024-12-09 10:09:10.589378] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:58:03.632 [2024-12-09 10:09:10.589393] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:58:03.632 [2024-12-09 10:09:10.589405] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:58:03.632 [2024-12-09 10:09:10.589425] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:58:03.632 [2024-12-09 10:09:10.589438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.632 [2024-12-09 10:09:10.589454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:58:03.632 [2024-12-09 10:09:10.589466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:58:03.632 [2024-12-09 10:09:10.589481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.632 [2024-12-09 10:09:10.589578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.632 [2024-12-09 10:09:10.589599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:58:03.632 [2024-12-09 10:09:10.589613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:58:03.632 [2024-12-09 10:09:10.589628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.632 [2024-12-09 10:09:10.589746] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:58:03.632 [2024-12-09 10:09:10.589769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:58:03.632 [2024-12-09 10:09:10.589781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:58:03.632 [2024-12-09 10:09:10.589797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:03.632 [2024-12-09 10:09:10.589809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:58:03.632 [2024-12-09 10:09:10.589827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:58:03.632 [2024-12-09 10:09:10.589863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:58:03.632 [2024-12-09 10:09:10.589883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:58:03.632 [2024-12-09 10:09:10.589894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:58:03.632 [2024-12-09 10:09:10.589908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:58:03.632 [2024-12-09 10:09:10.589919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:58:03.632 [2024-12-09 10:09:10.589932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:58:03.632 [2024-12-09 10:09:10.589943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:58:03.632 [2024-12-09 10:09:10.589957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:58:03.632 [2024-12-09 10:09:10.589968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:58:03.632 [2024-12-09 10:09:10.589981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:03.632 
[2024-12-09 10:09:10.589993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:58:03.632 [2024-12-09 10:09:10.590007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:58:03.632 [2024-12-09 10:09:10.590029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:03.632 [2024-12-09 10:09:10.590044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:58:03.632 [2024-12-09 10:09:10.590055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:58:03.632 [2024-12-09 10:09:10.590068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:03.632 [2024-12-09 10:09:10.590079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:58:03.632 [2024-12-09 10:09:10.590095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:58:03.632 [2024-12-09 10:09:10.590107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:03.632 [2024-12-09 10:09:10.590120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:58:03.632 [2024-12-09 10:09:10.590134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:58:03.632 [2024-12-09 10:09:10.590147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:03.632 [2024-12-09 10:09:10.590158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:58:03.632 [2024-12-09 10:09:10.590188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:58:03.632 [2024-12-09 10:09:10.590199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:03.632 [2024-12-09 10:09:10.590211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:58:03.632 [2024-12-09 10:09:10.590222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:58:03.632 [2024-12-09 10:09:10.590239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:58:03.632 [2024-12-09 10:09:10.590525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:58:03.632 [2024-12-09 10:09:10.590593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:58:03.632 [2024-12-09 10:09:10.590646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:58:03.632 [2024-12-09 10:09:10.590693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:58:03.632 [2024-12-09 10:09:10.590938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:58:03.632 [2024-12-09 10:09:10.591024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:03.632 [2024-12-09 10:09:10.591210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:58:03.632 [2024-12-09 10:09:10.591286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:58:03.632 [2024-12-09 10:09:10.591334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:03.632 [2024-12-09 10:09:10.591468] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:58:03.632 [2024-12-09 10:09:10.591525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:58:03.632 [2024-12-09 10:09:10.591574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:58:03.632 [2024-12-09 10:09:10.591682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:03.632 [2024-12-09 10:09:10.591785] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:58:03.632 [2024-12-09 10:09:10.591839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:58:03.632 [2024-12-09 10:09:10.591887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:58:03.632 [2024-12-09 10:09:10.592049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:58:03.632 [2024-12-09 10:09:10.592109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:58:03.632 [2024-12-09 10:09:10.592155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:58:03.632 [2024-12-09 10:09:10.592317] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:58:03.632 [2024-12-09 10:09:10.592406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:03.632 [2024-12-09 10:09:10.592473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:58:03.632 [2024-12-09 10:09:10.592489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:58:03.632 [2024-12-09 10:09:10.592508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:58:03.633 [2024-12-09 10:09:10.592520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:58:03.633 [2024-12-09 10:09:10.592538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:58:03.633 [2024-12-09 10:09:10.592550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:58:03.633 [2024-12-09 10:09:10.592591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:58:03.633 [2024-12-09 10:09:10.592630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:58:03.633 [2024-12-09 10:09:10.592646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:58:03.633 [2024-12-09 10:09:10.592658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:58:03.633 [2024-12-09 10:09:10.592674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:58:03.633 [2024-12-09 10:09:10.592687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:58:03.633 [2024-12-09 10:09:10.592703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:58:03.633 [2024-12-09 10:09:10.592716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:58:03.633 [2024-12-09 10:09:10.592732] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:58:03.633 [2024-12-09 
10:09:10.592746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:03.633 [2024-12-09 10:09:10.592768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:58:03.633 [2024-12-09 10:09:10.592780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:58:03.633 [2024-12-09 10:09:10.592797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:58:03.633 [2024-12-09 10:09:10.592809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:58:03.633 [2024-12-09 10:09:10.592828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.633 [2024-12-09 10:09:10.592841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:58:03.633 [2024-12-09 10:09:10.592859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.135 ms 00:58:03.633 [2024-12-09 10:09:10.592877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.633 [2024-12-09 10:09:10.635559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.633 [2024-12-09 10:09:10.635659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:03.633 [2024-12-09 10:09:10.635715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.548 ms 00:58:03.633 [2024-12-09 10:09:10.635732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.633 [2024-12-09 10:09:10.635961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.633 [2024-12-09 10:09:10.635980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:58:03.633 [2024-12-09 10:09:10.635997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:58:03.633 [2024-12-09 10:09:10.636023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.892 [2024-12-09 10:09:10.689232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.892 [2024-12-09 10:09:10.689348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:03.892 [2024-12-09 10:09:10.689374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.172 ms 00:58:03.892 [2024-12-09 10:09:10.689387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.892 [2024-12-09 10:09:10.689556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.892 [2024-12-09 10:09:10.689575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:03.892 [2024-12-09 10:09:10.689591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:58:03.892 [2024-12-09 10:09:10.689604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.892 [2024-12-09 10:09:10.690216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.892 [2024-12-09 10:09:10.690262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:03.892 [2024-12-09 10:09:10.690283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:58:03.892 [2024-12-09 10:09:10.690296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:58:03.892 [2024-12-09 10:09:10.690484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.892 [2024-12-09 10:09:10.690502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:03.892 [2024-12-09 10:09:10.690522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:58:03.892 [2024-12-09 10:09:10.690534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.892 [2024-12-09 10:09:10.715952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.892 [2024-12-09 10:09:10.716055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:03.892 [2024-12-09 10:09:10.716100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.378 ms 00:58:03.892 [2024-12-09 10:09:10.716114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.892 [2024-12-09 10:09:10.752610] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:58:03.892 [2024-12-09 10:09:10.752697] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:58:03.892 [2024-12-09 10:09:10.752731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.892 [2024-12-09 10:09:10.752747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:58:03.892 [2024-12-09 10:09:10.752770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.365 ms 00:58:03.892 [2024-12-09 10:09:10.752798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.892 [2024-12-09 10:09:10.783794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.892 [2024-12-09 10:09:10.783843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:58:03.892 [2024-12-09 10:09:10.783911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.871 ms 00:58:03.892 [2024-12-09 10:09:10.783924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.892 [2024-12-09 10:09:10.800842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.892 [2024-12-09 10:09:10.800888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:58:03.892 [2024-12-09 10:09:10.800919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.731 ms 00:58:03.892 [2024-12-09 10:09:10.800932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.892 [2024-12-09 10:09:10.816433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.892 [2024-12-09 10:09:10.816501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:58:03.892 [2024-12-09 10:09:10.816524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.368 ms 00:58:03.892 [2024-12-09 10:09:10.816543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.892 [2024-12-09 10:09:10.817525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.892 [2024-12-09 10:09:10.817679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:58:03.892 [2024-12-09 10:09:10.817719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:58:03.892 [2024-12-09 10:09:10.817734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.892 [2024-12-09 
10:09:10.901390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:03.892 [2024-12-09 10:09:10.901501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:58:03.892 [2024-12-09 10:09:10.901533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.607 ms 00:58:03.892 [2024-12-09 10:09:10.901551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:03.892 [2024-12-09 10:09:10.915415] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:58:04.151 [2024-12-09 10:09:10.938624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:04.151 [2024-12-09 10:09:10.938720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:58:04.151 [2024-12-09 10:09:10.938747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.824 ms 00:58:04.151 [2024-12-09 10:09:10.938763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:04.151 [2024-12-09 10:09:10.938912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:04.151 [2024-12-09 10:09:10.938947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:58:04.151 [2024-12-09 10:09:10.938962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:58:04.151 [2024-12-09 10:09:10.938976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:04.151 [2024-12-09 10:09:10.939053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:04.151 [2024-12-09 10:09:10.939072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:58:04.151 [2024-12-09 10:09:10.939086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:58:04.151 [2024-12-09 10:09:10.939104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:04.151 [2024-12-09 10:09:10.939138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:04.151 [2024-12-09 10:09:10.939155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:58:04.151 [2024-12-09 10:09:10.939168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:58:04.151 [2024-12-09 10:09:10.939183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:04.151 [2024-12-09 10:09:10.939231] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:58:04.151 [2024-12-09 10:09:10.939283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:04.151 [2024-12-09 10:09:10.939303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:58:04.151 [2024-12-09 10:09:10.939319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:58:04.151 [2024-12-09 10:09:10.939331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:04.151 [2024-12-09 10:09:10.971536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:04.151 [2024-12-09 10:09:10.971601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:58:04.151 [2024-12-09 10:09:10.971630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.153 ms 00:58:04.151 [2024-12-09 10:09:10.971644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:04.151 [2024-12-09 10:09:10.971810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:04.151 [2024-12-09 10:09:10.971833] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:58:04.151 [2024-12-09 10:09:10.971854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:58:04.151 [2024-12-09 10:09:10.971873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:04.151 [2024-12-09 10:09:10.973198] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:58:04.152 [2024-12-09 10:09:10.977560] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 424.856 ms, result 0 00:58:04.152 [2024-12-09 10:09:10.978808] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:58:04.152 Some configs were skipped because the RPC state that can call them passed over. 00:58:04.152 10:09:11 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:58:04.410 [2024-12-09 10:09:11.333383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:04.410 [2024-12-09 10:09:11.333694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:58:04.410 [2024-12-09 10:09:11.333820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.689 ms 00:58:04.410 [2024-12-09 10:09:11.333910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:04.410 [2024-12-09 10:09:11.334081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.386 ms, result 0 00:58:04.410 true 00:58:04.410 10:09:11 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:58:04.668 [2024-12-09 10:09:11.613306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:04.668 [2024-12-09 10:09:11.613364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:58:04.668 [2024-12-09 10:09:11.613403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:58:04.668 [2024-12-09 10:09:11.613416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:04.668 [2024-12-09 10:09:11.613469] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.388 ms, result 0 00:58:04.668 true 00:58:04.668 10:09:11 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78911 00:58:04.668 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78911 ']' 00:58:04.668 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78911 00:58:04.668 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:58:04.668 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:04.668 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78911 00:58:04.668 killing process with pid 78911 00:58:04.668 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:04.668 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:04.668 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78911' 00:58:04.668 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78911 00:58:04.668 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78911 00:58:06.045 [2024-12-09 10:09:12.661845] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.045 [2024-12-09 10:09:12.661959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:58:06.045 [2024-12-09 10:09:12.661980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:58:06.045 [2024-12-09 10:09:12.661995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.045 [2024-12-09 10:09:12.662030] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:58:06.045 [2024-12-09 10:09:12.665634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.045 [2024-12-09 10:09:12.665679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:58:06.045 [2024-12-09 10:09:12.665714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.578 ms 00:58:06.045 [2024-12-09 10:09:12.665724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.045 [2024-12-09 10:09:12.666041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.045 [2024-12-09 10:09:12.666061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:58:06.045 [2024-12-09 10:09:12.666075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:58:06.045 [2024-12-09 10:09:12.666086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.045 [2024-12-09 10:09:12.670061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.045 [2024-12-09 10:09:12.670103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:58:06.045 [2024-12-09 10:09:12.670125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.947 ms 00:58:06.045 [2024-12-09 10:09:12.670137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.045 [2024-12-09 10:09:12.676714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.045 [2024-12-09 10:09:12.676910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:58:06.045 [2024-12-09 10:09:12.676959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.515 ms 00:58:06.045 [2024-12-09 10:09:12.676972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.045 [2024-12-09 10:09:12.688130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.045 [2024-12-09 10:09:12.688368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:58:06.045 [2024-12-09 10:09:12.688435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.091 ms 00:58:06.045 [2024-12-09 10:09:12.688448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.045 [2024-12-09 10:09:12.697161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.045 [2024-12-09 10:09:12.697204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:58:06.045 [2024-12-09 10:09:12.697238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.658 ms 00:58:06.045 [2024-12-09 10:09:12.697249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.045 [2024-12-09 10:09:12.697434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.045 [2024-12-09 10:09:12.697453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:58:06.045 [2024-12-09 10:09:12.697466] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:58:06.045 [2024-12-09 10:09:12.697477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.045 [2024-12-09 10:09:12.709697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.045 [2024-12-09 10:09:12.709733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:58:06.045 [2024-12-09 10:09:12.709769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.188 ms 00:58:06.045 [2024-12-09 10:09:12.709780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.045 [2024-12-09 10:09:12.721579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.045 [2024-12-09 10:09:12.721616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:58:06.045 [2024-12-09 10:09:12.721662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.751 ms 00:58:06.045 [2024-12-09 10:09:12.721688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.045 [2024-12-09 10:09:12.733805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.045 [2024-12-09 10:09:12.733867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:58:06.045 [2024-12-09 10:09:12.733897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.067 ms 00:58:06.045 [2024-12-09 10:09:12.733910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.045 [2024-12-09 10:09:12.746702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.045 [2024-12-09 10:09:12.746758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:58:06.045 [2024-12-09 10:09:12.746797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.703 ms 00:58:06.045 [2024-12-09 10:09:12.746825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.045 [2024-12-09 10:09:12.746872] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:58:06.045 [2024-12-09 10:09:12.746896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.746913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.746925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.746955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.746968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.746986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.746999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 
10:09:12.747053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:58:06.045 [2024-12-09 10:09:12.747471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:58:06.045 [2024-12-09 10:09:12.747569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.747990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.748854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:58:06.046 [2024-12-09 10:09:12.749556] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:58:06.046 [2024-12-09 10:09:12.749598] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68115f76-6e25-4e17-a078-1a730c2e63d7 00:58:06.046 [2024-12-09 10:09:12.749618] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:58:06.046 [2024-12-09 10:09:12.749634] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:58:06.046 [2024-12-09 10:09:12.749645] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:58:06.046 [2024-12-09 10:09:12.749662] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:58:06.046 [2024-12-09 10:09:12.749673] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:58:06.046 [2024-12-09 10:09:12.749691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:58:06.046 [2024-12-09 10:09:12.749702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:58:06.046 [2024-12-09 10:09:12.749717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:58:06.046 [2024-12-09 10:09:12.749728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:58:06.046 [2024-12-09 10:09:12.749745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:58:06.046 [2024-12-09 10:09:12.749769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:58:06.046 [2024-12-09 10:09:12.749788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.875 ms 00:58:06.046 [2024-12-09 10:09:12.749800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.046 [2024-12-09 10:09:12.768014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.046 [2024-12-09 10:09:12.768197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:58:06.046 [2024-12-09 10:09:12.768242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.091 ms 00:58:06.046 [2024-12-09 10:09:12.768283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.046 [2024-12-09 10:09:12.768864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:06.046 [2024-12-09 10:09:12.768906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:58:06.046 [2024-12-09 10:09:12.768933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:58:06.046 [2024-12-09 10:09:12.768946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.046 [2024-12-09 10:09:12.831946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:06.046 [2024-12-09 10:09:12.832024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:06.046 [2024-12-09 10:09:12.832052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:06.046 [2024-12-09 10:09:12.832066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.046 [2024-12-09 10:09:12.832224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:06.046 [2024-12-09 10:09:12.832244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:06.046 [2024-12-09 10:09:12.832291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:06.046 [2024-12-09 10:09:12.832305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.046 [2024-12-09 10:09:12.832384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:06.046 [2024-12-09 10:09:12.832403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:06.046 [2024-12-09 10:09:12.832427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:06.046 [2024-12-09 10:09:12.832440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.046 [2024-12-09 10:09:12.832474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:06.046 [2024-12-09 10:09:12.832490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:06.046 [2024-12-09 10:09:12.832507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:06.047 [2024-12-09 10:09:12.832525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.047 [2024-12-09 10:09:12.945528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:06.047 [2024-12-09 10:09:12.945605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:06.047 [2024-12-09 10:09:12.945644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:06.047 [2024-12-09 10:09:12.945657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.047 [2024-12-09 
10:09:13.029573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:06.047 [2024-12-09 10:09:13.029676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:06.047 [2024-12-09 10:09:13.029719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:06.047 [2024-12-09 10:09:13.029739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.047 [2024-12-09 10:09:13.029869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:06.047 [2024-12-09 10:09:13.029890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:06.047 [2024-12-09 10:09:13.029914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:06.047 [2024-12-09 10:09:13.029927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.047 [2024-12-09 10:09:13.029975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:06.047 [2024-12-09 10:09:13.029991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:06.047 [2024-12-09 10:09:13.030009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:06.047 [2024-12-09 10:09:13.030022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.047 [2024-12-09 10:09:13.030167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:06.047 [2024-12-09 10:09:13.030188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:06.047 [2024-12-09 10:09:13.030207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:06.047 [2024-12-09 10:09:13.030220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.047 [2024-12-09 10:09:13.030314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:06.047 [2024-12-09 10:09:13.030334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:58:06.047 [2024-12-09 10:09:13.030358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:06.047 [2024-12-09 10:09:13.030371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.047 [2024-12-09 10:09:13.030435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:06.047 [2024-12-09 10:09:13.030452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:06.047 [2024-12-09 10:09:13.030475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:06.047 [2024-12-09 10:09:13.030488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.047 [2024-12-09 10:09:13.030552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:06.047 [2024-12-09 10:09:13.030617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:06.047 [2024-12-09 10:09:13.030648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:06.047 [2024-12-09 10:09:13.030662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:06.047 [2024-12-09 10:09:13.030859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.982 ms, result 0 00:58:06.985 10:09:13 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:58:06.985 10:09:13 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:58:07.244 [2024-12-09 10:09:14.090127] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:58:07.244 [2024-12-09 10:09:14.090358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78976 ] 00:58:07.244 [2024-12-09 10:09:14.277294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:07.502 [2024-12-09 10:09:14.403401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:07.761 [2024-12-09 10:09:14.744979] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:58:07.761 [2024-12-09 10:09:14.745445] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:58:08.020 [2024-12-09 10:09:14.911137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.020 [2024-12-09 10:09:14.911435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:58:08.020 [2024-12-09 10:09:14.911468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:58:08.020 [2024-12-09 10:09:14.911482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.020 [2024-12-09 10:09:14.915512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.020 [2024-12-09 10:09:14.915598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:08.020 [2024-12-09 10:09:14.915646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.991 ms 00:58:08.020 [2024-12-09 10:09:14.915759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.020 [2024-12-09 10:09:14.916013] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:58:08.020 [2024-12-09 10:09:14.917458] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:58:08.020 [2024-12-09 10:09:14.917640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.020 [2024-12-09 10:09:14.917749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:08.020 [2024-12-09 10:09:14.917798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.654 ms 00:58:08.021 [2024-12-09 10:09:14.917986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.021 [2024-12-09 10:09:14.920387] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:58:08.021 [2024-12-09 10:09:14.938833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.021 [2024-12-09 10:09:14.939223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:58:08.021 [2024-12-09 10:09:14.939373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.444 ms 00:58:08.021 [2024-12-09 10:09:14.939423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.021 [2024-12-09 10:09:14.939718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.021 [2024-12-09 10:09:14.939869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:58:08.021 [2024-12-09 10:09:14.939992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.043 ms 00:58:08.021 [2024-12-09 10:09:14.940039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.021 [2024-12-09 10:09:14.950235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.021 [2024-12-09 10:09:14.950461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:08.021 [2024-12-09 10:09:14.950583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.966 ms 00:58:08.021 [2024-12-09 10:09:14.950630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.021 [2024-12-09 10:09:14.950828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.021 [2024-12-09 10:09:14.950884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:08.021 [2024-12-09 10:09:14.950922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:58:08.021 [2024-12-09 10:09:14.951020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.021 [2024-12-09 10:09:14.951110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.021 [2024-12-09 10:09:14.951222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:58:08.021 [2024-12-09 10:09:14.951297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:58:08.021 [2024-12-09 10:09:14.951393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.021 [2024-12-09 10:09:14.951534] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:58:08.021 [2024-12-09 10:09:14.956781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.021 [2024-12-09 10:09:14.956946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:08.021 [2024-12-09 10:09:14.957048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.259 ms 00:58:08.021 [2024-12-09 10:09:14.957094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.021 [2024-12-09 10:09:14.957194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.021 [2024-12-09 10:09:14.957244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:58:08.021 [2024-12-09 10:09:14.957430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:58:08.021 [2024-12-09 10:09:14.957510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.021 [2024-12-09 10:09:14.957595] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:58:08.021 [2024-12-09 10:09:14.957657] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:58:08.021 [2024-12-09 10:09:14.957744] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:58:08.021 [2024-12-09 10:09:14.957981] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:58:08.021 [2024-12-09 10:09:14.958135] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:58:08.021 [2024-12-09 10:09:14.958208] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:58:08.021 [2024-12-09 10:09:14.958421] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:58:08.021 [2024-12-09 10:09:14.958452] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:58:08.021 [2024-12-09 10:09:14.958466] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:58:08.021 [2024-12-09 10:09:14.958479] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:58:08.021 [2024-12-09 10:09:14.958491] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:58:08.021 [2024-12-09 10:09:14.958502] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:58:08.021 [2024-12-09 10:09:14.958513] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:58:08.021 [2024-12-09 10:09:14.958527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.021 [2024-12-09 10:09:14.958539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:58:08.021 [2024-12-09 10:09:14.958551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:58:08.021 [2024-12-09 10:09:14.958562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.021 [2024-12-09 10:09:14.958669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.021 [2024-12-09 10:09:14.958691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:58:08.021 [2024-12-09 10:09:14.958703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:58:08.021 [2024-12-09 10:09:14.958714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.021 [2024-12-09 10:09:14.958835] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:58:08.021 [2024-12-09 10:09:14.958854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:58:08.021 [2024-12-09 10:09:14.958867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:58:08.021 [2024-12-09 10:09:14.958879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:08.021 [2024-12-09 10:09:14.958915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:58:08.021 [2024-12-09 10:09:14.958940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:58:08.021 [2024-12-09 10:09:14.958950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:58:08.021 [2024-12-09 10:09:14.958961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:58:08.021 [2024-12-09 10:09:14.958971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:58:08.021 [2024-12-09 10:09:14.958981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:58:08.021 [2024-12-09 10:09:14.958991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:58:08.021 [2024-12-09 10:09:14.959014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:58:08.021 [2024-12-09 10:09:14.959024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:58:08.021 [2024-12-09 10:09:14.959034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:58:08.021 [2024-12-09 10:09:14.959045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:58:08.021 [2024-12-09 10:09:14.959055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:08.021 [2024-12-09 10:09:14.959080] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:58:08.021 [2024-12-09 10:09:14.959092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:58:08.021 [2024-12-09 10:09:14.959102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:08.021 [2024-12-09 10:09:14.959112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:58:08.021 [2024-12-09 10:09:14.959122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:58:08.021 [2024-12-09 10:09:14.959132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:08.021 [2024-12-09 10:09:14.959143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:58:08.021 [2024-12-09 10:09:14.959153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:58:08.021 [2024-12-09 10:09:14.959162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:08.021 [2024-12-09 10:09:14.959172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:58:08.021 [2024-12-09 10:09:14.959181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:58:08.021 [2024-12-09 10:09:14.959190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:08.021 [2024-12-09 10:09:14.959200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:58:08.021 [2024-12-09 10:09:14.959210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:58:08.021 [2024-12-09 10:09:14.959219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:08.021 [2024-12-09 10:09:14.959229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:58:08.021 [2024-12-09 10:09:14.959238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:58:08.021 [2024-12-09 10:09:14.959248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:58:08.021 [2024-12-09 10:09:14.959273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:58:08.021 [2024-12-09 10:09:14.959283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:58:08.021 [2024-12-09 10:09:14.959293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:58:08.021 [2024-12-09 10:09:14.959302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:58:08.021 [2024-12-09 10:09:14.959313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:58:08.021 [2024-12-09 10:09:14.959341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:08.021 [2024-12-09 10:09:14.959371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:58:08.021 [2024-12-09 10:09:14.959392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:58:08.021 [2024-12-09 10:09:14.959403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:08.021 [2024-12-09 10:09:14.959413] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:58:08.021 [2024-12-09 10:09:14.959424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:58:08.021 [2024-12-09 10:09:14.959441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:58:08.021 [2024-12-09 10:09:14.959452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:08.021 [2024-12-09 10:09:14.959463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:58:08.021 
[2024-12-09 10:09:14.959473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:58:08.021 [2024-12-09 10:09:14.959484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:58:08.021 [2024-12-09 10:09:14.959495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:58:08.021 [2024-12-09 10:09:14.959505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:58:08.021 [2024-12-09 10:09:14.959515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:58:08.021 [2024-12-09 10:09:14.959543] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:58:08.022 [2024-12-09 10:09:14.959557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:08.022 [2024-12-09 10:09:14.959570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:58:08.022 [2024-12-09 10:09:14.959581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:58:08.022 [2024-12-09 10:09:14.959592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:58:08.022 [2024-12-09 10:09:14.959603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:58:08.022 [2024-12-09 10:09:14.959614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:58:08.022 [2024-12-09 10:09:14.959625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:58:08.022 [2024-12-09 10:09:14.959636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:58:08.022 [2024-12-09 10:09:14.959647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:58:08.022 [2024-12-09 10:09:14.959658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:58:08.022 [2024-12-09 10:09:14.959669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:58:08.022 [2024-12-09 10:09:14.959680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:58:08.022 [2024-12-09 10:09:14.959691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:58:08.022 [2024-12-09 10:09:14.959702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:58:08.022 [2024-12-09 10:09:14.959714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:58:08.022 [2024-12-09 10:09:14.959725] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:58:08.022 [2024-12-09 10:09:14.959737] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:08.022 [2024-12-09 10:09:14.959750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:58:08.022 [2024-12-09 10:09:14.959761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:58:08.022 [2024-12-09 10:09:14.959773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:58:08.022 [2024-12-09 10:09:14.959784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:58:08.022 [2024-12-09 10:09:14.959797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.022 [2024-12-09 10:09:14.959813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:58:08.022 [2024-12-09 10:09:14.959825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:58:08.022 [2024-12-09 10:09:14.959835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.022 [2024-12-09 10:09:14.999774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.022 [2024-12-09 10:09:14.999837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:08.022 [2024-12-09 10:09:14.999872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.855 ms 00:58:08.022 [2024-12-09 10:09:14.999884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.022 [2024-12-09 10:09:15.000069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.022 [2024-12-09 10:09:15.000088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:58:08.022 [2024-12-09 10:09:15.000101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:58:08.022 [2024-12-09 10:09:15.000111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.022 [2024-12-09 10:09:15.054146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.022 [2024-12-09 10:09:15.054478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:08.022 [2024-12-09 10:09:15.054517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.003 ms 00:58:08.022 [2024-12-09 10:09:15.054531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.022 [2024-12-09 10:09:15.054692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.022 [2024-12-09 10:09:15.054713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:08.022 [2024-12-09 10:09:15.054726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:58:08.022 [2024-12-09 10:09:15.054738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.022 [2024-12-09 10:09:15.055352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.022 [2024-12-09 10:09:15.055376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:08.022 [2024-12-09 10:09:15.055396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:58:08.022 [2024-12-09 10:09:15.055422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.022 [2024-12-09 
10:09:15.055604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.022 [2024-12-09 10:09:15.055629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:08.022 [2024-12-09 10:09:15.055643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:58:08.022 [2024-12-09 10:09:15.055653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.074695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.074735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:08.281 [2024-12-09 10:09:15.074766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.012 ms 00:58:08.281 [2024-12-09 10:09:15.074777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.090378] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:58:08.281 [2024-12-09 10:09:15.090419] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:58:08.281 [2024-12-09 10:09:15.090452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.090463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:58:08.281 [2024-12-09 10:09:15.090475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.555 ms 00:58:08.281 [2024-12-09 10:09:15.090485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.116984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.117042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:58:08.281 [2024-12-09 10:09:15.117074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.411 ms 00:58:08.281 [2024-12-09 10:09:15.117085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.131829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.131869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:58:08.281 [2024-12-09 10:09:15.131900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.654 ms 00:58:08.281 [2024-12-09 10:09:15.131910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.146480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.146519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:58:08.281 [2024-12-09 10:09:15.146550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.466 ms 00:58:08.281 [2024-12-09 10:09:15.146560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.147387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.147426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:58:08.281 [2024-12-09 10:09:15.147440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:58:08.281 [2024-12-09 10:09:15.147452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.225414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.225479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:58:08.281 [2024-12-09 10:09:15.225515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.912 ms 00:58:08.281 [2024-12-09 10:09:15.225528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.237372] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:58:08.281 [2024-12-09 10:09:15.257138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.257204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:58:08.281 [2024-12-09 10:09:15.257240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.396 ms 00:58:08.281 [2024-12-09 10:09:15.257259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.257473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.257494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:58:08.281 [2024-12-09 10:09:15.257508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:58:08.281 [2024-12-09 10:09:15.257519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.257593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.257610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:58:08.281 [2024-12-09 10:09:15.257622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:58:08.281 [2024-12-09 10:09:15.257638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.257685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.257719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:58:08.281 [2024-12-09 10:09:15.257731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:58:08.281 [2024-12-09 10:09:15.257772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.257822] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:58:08.281 [2024-12-09 10:09:15.257866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.257879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:58:08.281 [2024-12-09 10:09:15.257892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:58:08.281 [2024-12-09 10:09:15.257903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.286499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.286541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:58:08.281 [2024-12-09 10:09:15.286573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.563 ms 00:58:08.281 [2024-12-09 10:09:15.286584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.286704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:08.281 [2024-12-09 10:09:15.286723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:58:08.281 [2024-12-09 10:09:15.286735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:58:08.281 [2024-12-09 10:09:15.286746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:08.281 [2024-12-09 10:09:15.287948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:58:08.281 [2024-12-09 10:09:15.291859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.486 ms, result 0 00:58:08.281 [2024-12-09 10:09:15.292764] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:58:08.281 [2024-12-09 10:09:15.307769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:58:09.656  [2024-12-09T10:09:17.635Z] Copying: 25/256 [MB] (25 MBps) [2024-12-09T10:09:18.570Z] Copying: 48/256 [MB] (22 MBps) [2024-12-09T10:09:19.506Z] Copying: 70/256 [MB] (21 MBps) [2024-12-09T10:09:20.441Z] Copying: 94/256 [MB] (24 MBps) [2024-12-09T10:09:21.378Z] Copying: 118/256 [MB] (24 MBps) [2024-12-09T10:09:22.314Z] Copying: 142/256 [MB] (23 MBps) [2024-12-09T10:09:23.692Z] Copying: 166/256 [MB] (24 MBps) [2024-12-09T10:09:24.682Z] Copying: 191/256 [MB] (24 MBps) [2024-12-09T10:09:25.636Z] Copying: 216/256 [MB] (24 MBps) [2024-12-09T10:09:26.203Z] Copying: 240/256 [MB] (24 MBps) [2024-12-09T10:09:26.203Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-09 10:09:25.951633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:58:19.159 [2024-12-09 10:09:25.967668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.159 [2024-12-09 10:09:25.967733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:58:19.159 [2024-12-09 10:09:25.967780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:58:19.159 [2024-12-09 10:09:25.967794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.159 [2024-12-09 10:09:25.967834] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:58:19.159 [2024-12-09 10:09:25.972444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.159 [2024-12-09 10:09:25.972486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:58:19.159 [2024-12-09 10:09:25.972513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.584 ms 00:58:19.159 [2024-12-09 10:09:25.972527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.160 [2024-12-09 10:09:25.972928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.160 [2024-12-09 10:09:25.972952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:58:19.160 [2024-12-09 10:09:25.972967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:58:19.160 [2024-12-09 10:09:25.972981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.160 [2024-12-09 10:09:25.977683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.160 [2024-12-09 10:09:25.977722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:58:19.160 [2024-12-09 10:09:25.977744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.644 ms 00:58:19.160 [2024-12-09 10:09:25.977757] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.160 [2024-12-09 10:09:25.987548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.160 [2024-12-09 10:09:25.987805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:58:19.160 [2024-12-09 10:09:25.987838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.736 ms 00:58:19.160 [2024-12-09 10:09:25.987854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.160 [2024-12-09 10:09:26.026868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.160 [2024-12-09 10:09:26.026927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:58:19.160 [2024-12-09 10:09:26.026955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.880 ms 00:58:19.160 [2024-12-09 10:09:26.026969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.160 [2024-12-09 10:09:26.049321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.160 [2024-12-09 10:09:26.049589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:58:19.160 [2024-12-09 10:09:26.049640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.278 ms 00:58:19.160 [2024-12-09 10:09:26.049656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.160 [2024-12-09 10:09:26.049882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.160 [2024-12-09 10:09:26.049908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:58:19.160 [2024-12-09 10:09:26.049947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:58:19.160 [2024-12-09 10:09:26.049961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.160 [2024-12-09 10:09:26.089579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.160 [2024-12-09 10:09:26.089662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:58:19.160 [2024-12-09 10:09:26.089692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.587 ms 00:58:19.160 [2024-12-09 10:09:26.089716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.160 [2024-12-09 10:09:26.128210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.160 [2024-12-09 10:09:26.128282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:58:19.160 [2024-12-09 10:09:26.128303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.397 ms 00:58:19.160 [2024-12-09 10:09:26.128317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.160 [2024-12-09 10:09:26.166905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.160 [2024-12-09 10:09:26.166956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:58:19.160 [2024-12-09 10:09:26.166994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.511 ms 00:58:19.160 [2024-12-09 10:09:26.167008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.160 [2024-12-09 10:09:26.199929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.160 [2024-12-09 10:09:26.200012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:58:19.160 [2024-12-09 10:09:26.200028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 32.812 ms 00:58:19.160 [2024-12-09 10:09:26.200055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.160 [2024-12-09 10:09:26.200134] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:58:19.160 [2024-12-09 10:09:26.200169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 
[2024-12-09 10:09:26.200465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:58:19.160 [2024-12-09 10:09:26.200764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:58:19.160 [2024-12-09 10:09:26.200855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.200867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.200879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.200890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.200902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.200914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.200926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.200937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.200951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.200962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.200974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.200985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:58:19.161 [2024-12-09 10:09:26.201430] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:58:19.161 [2024-12-09 10:09:26.201442] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68115f76-6e25-4e17-a078-1a730c2e63d7 00:58:19.161 [2024-12-09 10:09:26.201454] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:58:19.161 [2024-12-09 10:09:26.201465] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:58:19.161 [2024-12-09 10:09:26.201476] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:58:19.161 [2024-12-09 10:09:26.201487] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:58:19.161 [2024-12-09 10:09:26.201498] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:58:19.161 [2024-12-09 10:09:26.201509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:58:19.161 [2024-12-09 10:09:26.201530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:58:19.161 [2024-12-09 10:09:26.201541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:58:19.161 [2024-12-09 10:09:26.201551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:58:19.161 [2024-12-09 10:09:26.201562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.161 [2024-12-09 10:09:26.201574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:58:19.161 [2024-12-09 10:09:26.201586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:58:19.161 [2024-12-09 10:09:26.201601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.420 [2024-12-09 10:09:26.220512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.420 [2024-12-09 10:09:26.220568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:58:19.420 [2024-12-09 10:09:26.220600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.882 ms 00:58:19.420 [2024-12-09 10:09:26.220627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.420 [2024-12-09 10:09:26.221203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:19.420 [2024-12-09 10:09:26.221241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:58:19.420 [2024-12-09 10:09:26.221275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:58:19.420 [2024-12-09 10:09:26.221288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.420 [2024-12-09 10:09:26.278781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:19.420 [2024-12-09 10:09:26.279047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:19.420 [2024-12-09 10:09:26.279075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:19.420 [2024-12-09 10:09:26.279104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.420 [2024-12-09 10:09:26.279210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:19.420 [2024-12-09 
10:09:26.279228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:19.420 [2024-12-09 10:09:26.279240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:19.420 [2024-12-09 10:09:26.279278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.420 [2024-12-09 10:09:26.279349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:19.420 [2024-12-09 10:09:26.279368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:19.420 [2024-12-09 10:09:26.279381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:19.420 [2024-12-09 10:09:26.279393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.420 [2024-12-09 10:09:26.279435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:19.420 [2024-12-09 10:09:26.279450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:19.420 [2024-12-09 10:09:26.279462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:19.420 [2024-12-09 10:09:26.279474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.420 [2024-12-09 10:09:26.400417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:19.421 [2024-12-09 10:09:26.400501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:19.421 [2024-12-09 10:09:26.400521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:19.421 [2024-12-09 10:09:26.400533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.679 [2024-12-09 10:09:26.495154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:19.679 [2024-12-09 10:09:26.495213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:19.679 [2024-12-09 10:09:26.495232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:19.679 [2024-12-09 10:09:26.495245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.679 [2024-12-09 10:09:26.495368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:19.679 [2024-12-09 10:09:26.495387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:19.679 [2024-12-09 10:09:26.495400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:19.679 [2024-12-09 10:09:26.495412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.679 [2024-12-09 10:09:26.495450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:19.679 [2024-12-09 10:09:26.495481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:19.679 [2024-12-09 10:09:26.495493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:19.679 [2024-12-09 10:09:26.495505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.679 [2024-12-09 10:09:26.495638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:19.679 [2024-12-09 10:09:26.495708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:19.679 [2024-12-09 10:09:26.495727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:19.679 [2024-12-09 10:09:26.495740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.679 [2024-12-09 10:09:26.495798] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:19.679 [2024-12-09 10:09:26.495816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:58:19.679 [2024-12-09 10:09:26.495845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:19.679 [2024-12-09 10:09:26.495857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.679 [2024-12-09 10:09:26.495908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:19.679 [2024-12-09 10:09:26.495925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:19.679 [2024-12-09 10:09:26.495937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:19.679 [2024-12-09 10:09:26.495948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.679 [2024-12-09 10:09:26.496002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:19.679 [2024-12-09 10:09:26.496031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:19.679 [2024-12-09 10:09:26.496044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:19.679 [2024-12-09 10:09:26.496056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:19.679 [2024-12-09 10:09:26.496246] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 528.601 ms, result 0 00:58:20.614 00:58:20.614 00:58:20.614 10:09:27 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:58:20.614 10:09:27 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:58:21.550 10:09:28 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:58:21.550 [2024-12-09 10:09:28.405769] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
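For readers tracing the test flow: the xtrace entries above (ftl/trim.sh@84 through @90) reduce to the short sequence sketched below. Paths and flags are copied from the log itself; the ftl0 bdev and ftl.json config are assumed to have been created by the earlier setup steps, and the cmp against /dev/zero is presumably the check that the trimmed range reads back as zeroes.

    #!/usr/bin/env bash
    # Sketch of the ftl/trim.sh@84-90 steps recorded above (paths copied from
    # the xtrace; the ftl0 bdev and ftl.json config come from earlier setup).
    SPDK=/home/vagrant/spdk_repo/spdk
    file=$SPDK/test/ftl/data

    # trim.sh@85: read 65536 blocks back from the ftl0 bdev into a file.
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$file" --count=65536 \
        --json="$SPDK/test/ftl/config/ftl.json"

    # trim.sh@86: the first 4 MiB (4194304 bytes) must compare equal to
    # zeroes, presumably because that range was trimmed earlier in the test.
    cmp --bytes=4194304 "$file" /dev/zero

    # trim.sh@87: fingerprint the dump for a later comparison.
    md5sum "$file"

    # trim.sh@90: overwrite the first 1024 blocks of ftl0 with random data.
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" --ob=ftl0 \
        --count=1024 --json="$SPDK/test/ftl/config/ftl.json"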
00:58:21.550 [2024-12-09 10:09:28.406077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79125 ] 00:58:21.809 [2024-12-09 10:09:28.600678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:21.809 [2024-12-09 10:09:28.747539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:22.377 [2024-12-09 10:09:29.153247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:58:22.378 [2024-12-09 10:09:29.153377] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:58:22.378 [2024-12-09 10:09:29.318232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.378 [2024-12-09 10:09:29.318317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:58:22.378 [2024-12-09 10:09:29.318339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:58:22.378 [2024-12-09 10:09:29.318352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.378 [2024-12-09 10:09:29.321871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.378 [2024-12-09 10:09:29.321915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:22.378 [2024-12-09 10:09:29.321933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.481 ms 00:58:22.378 [2024-12-09 10:09:29.321956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.378 [2024-12-09 10:09:29.322134] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:58:22.378 [2024-12-09 10:09:29.323115] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:58:22.378 [2024-12-09 10:09:29.323159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.378 [2024-12-09 10:09:29.323174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:22.378 [2024-12-09 10:09:29.323187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:58:22.378 [2024-12-09 10:09:29.323198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.378 [2024-12-09 10:09:29.325421] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:58:22.378 [2024-12-09 10:09:29.342723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.378 [2024-12-09 10:09:29.342774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:58:22.378 [2024-12-09 10:09:29.342794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.305 ms 00:58:22.378 [2024-12-09 10:09:29.342806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.378 [2024-12-09 10:09:29.342937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.378 [2024-12-09 10:09:29.342959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:58:22.378 [2024-12-09 10:09:29.342973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:58:22.378 [2024-12-09 10:09:29.342984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.378 [2024-12-09 10:09:29.351981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:58:22.378 [2024-12-09 10:09:29.352049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:22.378 [2024-12-09 10:09:29.352081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.933 ms 00:58:22.378 [2024-12-09 10:09:29.352108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.378 [2024-12-09 10:09:29.352269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.378 [2024-12-09 10:09:29.352321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:22.378 [2024-12-09 10:09:29.352336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:58:22.378 [2024-12-09 10:09:29.352348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.378 [2024-12-09 10:09:29.352399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.378 [2024-12-09 10:09:29.352416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:58:22.378 [2024-12-09 10:09:29.352429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:58:22.378 [2024-12-09 10:09:29.352440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.378 [2024-12-09 10:09:29.352473] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:58:22.378 [2024-12-09 10:09:29.357872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.378 [2024-12-09 10:09:29.357913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:22.378 [2024-12-09 10:09:29.357929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.409 ms 00:58:22.378 [2024-12-09 10:09:29.357941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.378 [2024-12-09 10:09:29.358044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.378 [2024-12-09 10:09:29.358063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:58:22.378 [2024-12-09 10:09:29.358076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:58:22.378 [2024-12-09 10:09:29.358087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.378 [2024-12-09 10:09:29.358125] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:58:22.378 [2024-12-09 10:09:29.358156] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:58:22.378 [2024-12-09 10:09:29.358222] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:58:22.378 [2024-12-09 10:09:29.358242] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:58:22.378 [2024-12-09 10:09:29.358391] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:58:22.378 [2024-12-09 10:09:29.358421] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:58:22.378 [2024-12-09 10:09:29.358437] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:58:22.378 [2024-12-09 10:09:29.358458] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:58:22.378 [2024-12-09 10:09:29.358471] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:58:22.378 [2024-12-09 10:09:29.358483] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:58:22.378 [2024-12-09 10:09:29.358495] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:58:22.378 [2024-12-09 10:09:29.358506] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:58:22.378 [2024-12-09 10:09:29.358517] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:58:22.378 [2024-12-09 10:09:29.358529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.378 [2024-12-09 10:09:29.358541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:58:22.378 [2024-12-09 10:09:29.358553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:58:22.378 [2024-12-09 10:09:29.358564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.378 [2024-12-09 10:09:29.358665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.378 [2024-12-09 10:09:29.358838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:58:22.378 [2024-12-09 10:09:29.358864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:58:22.378 [2024-12-09 10:09:29.358878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.378 [2024-12-09 10:09:29.359010] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:58:22.378 [2024-12-09 10:09:29.359030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:58:22.378 [2024-12-09 10:09:29.359044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:58:22.378 [2024-12-09 10:09:29.359055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:22.378 [2024-12-09 10:09:29.359067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:58:22.378 [2024-12-09 10:09:29.359084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:58:22.378 [2024-12-09 10:09:29.359094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:58:22.378 [2024-12-09 10:09:29.359104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:58:22.378 [2024-12-09 10:09:29.359115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:58:22.378 [2024-12-09 10:09:29.359125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:58:22.378 [2024-12-09 10:09:29.359135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:58:22.378 [2024-12-09 10:09:29.359159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:58:22.378 [2024-12-09 10:09:29.359169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:58:22.378 [2024-12-09 10:09:29.359180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:58:22.378 [2024-12-09 10:09:29.359192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:58:22.378 [2024-12-09 10:09:29.359203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:22.378 [2024-12-09 10:09:29.359214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:58:22.378 [2024-12-09 10:09:29.359224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:58:22.378 [2024-12-09 10:09:29.359234] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:22.378 [2024-12-09 10:09:29.359244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:58:22.378 [2024-12-09 10:09:29.359274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:58:22.378 [2024-12-09 10:09:29.359286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:22.378 [2024-12-09 10:09:29.359297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:58:22.378 [2024-12-09 10:09:29.359308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:58:22.378 [2024-12-09 10:09:29.359318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:22.378 [2024-12-09 10:09:29.359328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:58:22.378 [2024-12-09 10:09:29.359339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:58:22.378 [2024-12-09 10:09:29.359350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:22.378 [2024-12-09 10:09:29.359360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:58:22.378 [2024-12-09 10:09:29.359370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:58:22.378 [2024-12-09 10:09:29.359381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:22.378 [2024-12-09 10:09:29.359391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:58:22.378 [2024-12-09 10:09:29.359402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:58:22.378 [2024-12-09 10:09:29.359412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:58:22.378 [2024-12-09 10:09:29.359424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:58:22.378 [2024-12-09 10:09:29.359435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:58:22.378 [2024-12-09 10:09:29.359446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:58:22.378 [2024-12-09 10:09:29.359457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:58:22.378 [2024-12-09 10:09:29.359468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:58:22.378 [2024-12-09 10:09:29.359478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:22.379 [2024-12-09 10:09:29.359488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:58:22.379 [2024-12-09 10:09:29.359499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:58:22.379 [2024-12-09 10:09:29.359509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:22.379 [2024-12-09 10:09:29.359520] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:58:22.379 [2024-12-09 10:09:29.359532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:58:22.379 [2024-12-09 10:09:29.359549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:58:22.379 [2024-12-09 10:09:29.359561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:22.379 [2024-12-09 10:09:29.359573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:58:22.379 [2024-12-09 10:09:29.359584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:58:22.379 [2024-12-09 10:09:29.359595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:58:22.379 
[2024-12-09 10:09:29.359606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:58:22.379 [2024-12-09 10:09:29.359616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:58:22.379 [2024-12-09 10:09:29.359643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:58:22.379 [2024-12-09 10:09:29.359655] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:58:22.379 [2024-12-09 10:09:29.359670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:22.379 [2024-12-09 10:09:29.359683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:58:22.379 [2024-12-09 10:09:29.359694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:58:22.379 [2024-12-09 10:09:29.359706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:58:22.379 [2024-12-09 10:09:29.359717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:58:22.379 [2024-12-09 10:09:29.359728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:58:22.379 [2024-12-09 10:09:29.359739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:58:22.379 [2024-12-09 10:09:29.359750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:58:22.379 [2024-12-09 10:09:29.359761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:58:22.379 [2024-12-09 10:09:29.359772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:58:22.379 [2024-12-09 10:09:29.359783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:58:22.379 [2024-12-09 10:09:29.359795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:58:22.379 [2024-12-09 10:09:29.359807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:58:22.379 [2024-12-09 10:09:29.359818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:58:22.379 [2024-12-09 10:09:29.359829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:58:22.379 [2024-12-09 10:09:29.359840] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:58:22.379 [2024-12-09 10:09:29.359853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:22.379 [2024-12-09 10:09:29.359881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:58:22.379 [2024-12-09 10:09:29.359893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:58:22.379 [2024-12-09 10:09:29.359904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:58:22.379 [2024-12-09 10:09:29.359916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:58:22.379 [2024-12-09 10:09:29.359930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.379 [2024-12-09 10:09:29.359946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:58:22.379 [2024-12-09 10:09:29.359958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:58:22.379 [2024-12-09 10:09:29.359970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.379 [2024-12-09 10:09:29.401644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.379 [2024-12-09 10:09:29.401710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:22.379 [2024-12-09 10:09:29.401733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.591 ms 00:58:22.379 [2024-12-09 10:09:29.401745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.379 [2024-12-09 10:09:29.401985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.379 [2024-12-09 10:09:29.402007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:58:22.379 [2024-12-09 10:09:29.402020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:58:22.379 [2024-12-09 10:09:29.402032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.463419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.463486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:22.639 [2024-12-09 10:09:29.463514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.353 ms 00:58:22.639 [2024-12-09 10:09:29.463526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.463713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.463734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:22.639 [2024-12-09 10:09:29.463747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:58:22.639 [2024-12-09 10:09:29.463759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.464384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.464404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:22.639 [2024-12-09 10:09:29.464426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:58:22.639 [2024-12-09 10:09:29.464437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.464619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.464639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:22.639 [2024-12-09 10:09:29.464652] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:58:22.639 [2024-12-09 10:09:29.464677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.485924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.485999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:22.639 [2024-12-09 10:09:29.486039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.199 ms 00:58:22.639 [2024-12-09 10:09:29.486054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.502089] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:58:22.639 [2024-12-09 10:09:29.502141] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:58:22.639 [2024-12-09 10:09:29.502188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.502214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:58:22.639 [2024-12-09 10:09:29.502227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.881 ms 00:58:22.639 [2024-12-09 10:09:29.502238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.531841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.531898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:58:22.639 [2024-12-09 10:09:29.531933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.436 ms 00:58:22.639 [2024-12-09 10:09:29.531961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.548347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.548406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:58:22.639 [2024-12-09 10:09:29.548424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.234 ms 00:58:22.639 [2024-12-09 10:09:29.548446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.564170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.564226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:58:22.639 [2024-12-09 10:09:29.564243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.627 ms 00:58:22.639 [2024-12-09 10:09:29.564303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.565373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.565408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:58:22.639 [2024-12-09 10:09:29.565440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:58:22.639 [2024-12-09 10:09:29.565452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.643873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.644205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:58:22.639 [2024-12-09 10:09:29.644237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 78.386 ms 00:58:22.639 [2024-12-09 10:09:29.644306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.656243] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:58:22.639 [2024-12-09 10:09:29.676887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.676971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:58:22.639 [2024-12-09 10:09:29.677017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.406 ms 00:58:22.639 [2024-12-09 10:09:29.677036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.677190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.677209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:58:22.639 [2024-12-09 10:09:29.677221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:58:22.639 [2024-12-09 10:09:29.677232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.677363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.677398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:58:22.639 [2024-12-09 10:09:29.677421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:58:22.639 [2024-12-09 10:09:29.677439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.677488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.677506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:58:22.639 [2024-12-09 10:09:29.677518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:58:22.639 [2024-12-09 10:09:29.677529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.639 [2024-12-09 10:09:29.677584] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:58:22.639 [2024-12-09 10:09:29.677601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.639 [2024-12-09 10:09:29.677627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:58:22.639 [2024-12-09 10:09:29.677640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:58:22.639 [2024-12-09 10:09:29.677666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.898 [2024-12-09 10:09:29.706412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.898 [2024-12-09 10:09:29.706452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:58:22.898 [2024-12-09 10:09:29.706483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.713 ms 00:58:22.898 [2024-12-09 10:09:29.706494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.898 [2024-12-09 10:09:29.706628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.898 [2024-12-09 10:09:29.706648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:58:22.898 [2024-12-09 10:09:29.706660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:58:22.898 [2024-12-09 10:09:29.706669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
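
One cross-check worth doing on the 'FTL layout' dump a few entries back: the L2P region size follows directly from the reported entry count and address size. With 23592960 entries at 4 bytes each, the table is 94371840 bytes, which is exactly the 90.00 MiB shown for 'Region l2p'. A quick shell verification of that arithmetic:

# L2P table size = entries * address size (values from the layout dump above).
echo $(( 23592960 * 4 ))            # 94371840 bytes
echo $(( 23592960 * 4 / 1048576 ))  # 90, matching 'Region l2p ... 90.00 MiB'
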
00:58:22.898 [2024-12-09 10:09:29.707905] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:58:22.898 [2024-12-09 10:09:29.711742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.341 ms, result 0 00:58:22.898 [2024-12-09 10:09:29.712653] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:58:22.898 [2024-12-09 10:09:29.727429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:58:22.898  [2024-12-09T10:09:29.942Z] Copying: 4096/4096 [kB] (average 22 MBps)[2024-12-09 10:09:29.910836] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:58:22.898 [2024-12-09 10:09:29.924964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.898 [2024-12-09 10:09:29.925009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:58:22.898 [2024-12-09 10:09:29.925038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:58:22.898 [2024-12-09 10:09:29.925050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.899 [2024-12-09 10:09:29.925082] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:58:22.899 [2024-12-09 10:09:29.929239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.899 [2024-12-09 10:09:29.929284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:58:22.899 [2024-12-09 10:09:29.929301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.136 ms 00:58:22.899 [2024-12-09 10:09:29.929313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.899 [2024-12-09 10:09:29.931429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.899 [2024-12-09 10:09:29.931468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:58:22.899 [2024-12-09 10:09:29.931499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:58:22.899 [2024-12-09 10:09:29.931510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:22.899 [2024-12-09 10:09:29.935397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:22.899 [2024-12-09 10:09:29.935437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:58:22.899 [2024-12-09 10:09:29.935453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.844 ms 00:58:22.899 [2024-12-09 10:09:29.935465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.158 [2024-12-09 10:09:29.943765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:23.158 [2024-12-09 10:09:29.943949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:58:23.158 [2024-12-09 10:09:29.943990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.259 ms 00:58:23.158 [2024-12-09 10:09:29.944004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.158 [2024-12-09 10:09:29.977985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:23.158 [2024-12-09 10:09:29.978072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:58:23.158 [2024-12-09 10:09:29.978092] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 33.906 ms 00:58:23.158 [2024-12-09 10:09:29.978104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.158 [2024-12-09 10:09:29.998239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:23.158 [2024-12-09 10:09:29.998450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:58:23.158 [2024-12-09 10:09:29.998480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.003 ms 00:58:23.158 [2024-12-09 10:09:29.998493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.158 [2024-12-09 10:09:29.998690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:23.158 [2024-12-09 10:09:29.998712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:58:23.158 [2024-12-09 10:09:29.998740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:58:23.158 [2024-12-09 10:09:29.998752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.158 [2024-12-09 10:09:30.027400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:23.158 [2024-12-09 10:09:30.027458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:58:23.158 [2024-12-09 10:09:30.027491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.622 ms 00:58:23.158 [2024-12-09 10:09:30.027501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.159 [2024-12-09 10:09:30.054511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:23.159 [2024-12-09 10:09:30.054679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:58:23.159 [2024-12-09 10:09:30.054706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.948 ms 00:58:23.159 [2024-12-09 10:09:30.054718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.159 [2024-12-09 10:09:30.085214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:23.159 [2024-12-09 10:09:30.085308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:58:23.159 [2024-12-09 10:09:30.085327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.429 ms 00:58:23.159 [2024-12-09 10:09:30.085338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.159 [2024-12-09 10:09:30.115904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:23.159 [2024-12-09 10:09:30.115974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:58:23.159 [2024-12-09 10:09:30.115991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.414 ms 00:58:23.159 [2024-12-09 10:09:30.116002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.159 [2024-12-09 10:09:30.116081] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:58:23.159 [2024-12-09 10:09:30.116121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:58:23.159 [2024-12-09 10:09:30.116171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.116997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.117007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.117019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.117030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.117041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.117052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.117062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.117073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.117084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.117095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.117105] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.117115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:58:23.159 [2024-12-09 10:09:30.117126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:58:23.160 [2024-12-09 10:09:30.117404] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:58:23.160 [2024-12-09 10:09:30.117415] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68115f76-6e25-4e17-a078-1a730c2e63d7 00:58:23.160 [2024-12-09 10:09:30.117427] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:58:23.160 [2024-12-09 10:09:30.117437] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:58:23.160 [2024-12-09 10:09:30.117448] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:58:23.160 [2024-12-09 10:09:30.117463] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:58:23.160 [2024-12-09 10:09:30.117473] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:58:23.160 [2024-12-09 10:09:30.117483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:58:23.160 [2024-12-09 10:09:30.117499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:58:23.160 [2024-12-09 10:09:30.117509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:58:23.160 [2024-12-09 10:09:30.117518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:58:23.160 [2024-12-09 10:09:30.117529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:23.160 [2024-12-09 10:09:30.117539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:58:23.160 [2024-12-09 10:09:30.117551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:58:23.160 [2024-12-09 10:09:30.117561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.160 [2024-12-09 10:09:30.134654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:23.160 [2024-12-09 10:09:30.134907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:58:23.160 [2024-12-09 10:09:30.134936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.065 ms 00:58:23.160 [2024-12-09 10:09:30.134949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.160 [2024-12-09 10:09:30.135490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:23.160 [2024-12-09 10:09:30.135515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:58:23.160 [2024-12-09 10:09:30.135530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:58:23.160 [2024-12-09 10:09:30.135541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.160 [2024-12-09 10:09:30.185383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:23.160 [2024-12-09 10:09:30.185458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:23.160 [2024-12-09 10:09:30.185478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:23.160 [2024-12-09 10:09:30.185504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.160 [2024-12-09 10:09:30.185671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:23.160 [2024-12-09 10:09:30.185690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:23.160 [2024-12-09 10:09:30.185703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:23.160 [2024-12-09 10:09:30.185714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.160 [2024-12-09 10:09:30.185784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:23.160 [2024-12-09 10:09:30.185802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:23.160 [2024-12-09 10:09:30.185814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:23.160 [2024-12-09 10:09:30.185826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.160 [2024-12-09 10:09:30.185884] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:23.160 [2024-12-09 10:09:30.185899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:23.160 [2024-12-09 10:09:30.185911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:23.160 [2024-12-09 10:09:30.185922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.419 [2024-12-09 10:09:30.303095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:23.419 [2024-12-09 10:09:30.303424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:23.419 [2024-12-09 10:09:30.303456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:23.419 [2024-12-09 10:09:30.303487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.419 [2024-12-09 10:09:30.397538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:23.419 [2024-12-09 10:09:30.397620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:23.419 [2024-12-09 10:09:30.397657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:23.419 [2024-12-09 10:09:30.397669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.419 [2024-12-09 10:09:30.397763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:23.419 [2024-12-09 10:09:30.397781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:23.419 [2024-12-09 10:09:30.397793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:23.419 [2024-12-09 10:09:30.397805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.419 [2024-12-09 10:09:30.397842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:23.419 [2024-12-09 10:09:30.397900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:23.419 [2024-12-09 10:09:30.397914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:23.419 [2024-12-09 10:09:30.397925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.419 [2024-12-09 10:09:30.398052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:23.419 [2024-12-09 10:09:30.398071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:23.419 [2024-12-09 10:09:30.398084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:23.419 [2024-12-09 10:09:30.398095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.419 [2024-12-09 10:09:30.398155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:23.419 [2024-12-09 10:09:30.398173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:58:23.419 [2024-12-09 10:09:30.398203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:23.419 [2024-12-09 10:09:30.398215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.419 [2024-12-09 10:09:30.398296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:23.419 [2024-12-09 10:09:30.398315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:23.419 [2024-12-09 10:09:30.398327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:23.419 [2024-12-09 10:09:30.398339] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:58:23.419 [2024-12-09 10:09:30.398396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:23.419 [2024-12-09 10:09:30.398426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:23.419 [2024-12-09 10:09:30.398439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:23.419 [2024-12-09 10:09:30.398450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:23.419 [2024-12-09 10:09:30.398639] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 473.658 ms, result 0 00:58:24.354 00:58:24.354 00:58:24.704 10:09:31 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79173 00:58:24.704 10:09:31 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:58:24.704 10:09:31 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79173 00:58:24.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:24.704 10:09:31 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79173 ']' 00:58:24.704 10:09:31 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:24.704 10:09:31 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:24.704 10:09:31 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:24.704 10:09:31 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:24.704 10:09:31 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:58:24.704 [2024-12-09 10:09:31.556513] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
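
trim.sh@92-@94 above bring up a fresh spdk_tgt with the ftl_init log flag and block in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers; the rpc.py load_config call just below then replays the saved JSON, which re-creates the bdevs and kicks off the 'FTL startup' trace that follows. A rough equivalent of those steps, assuming load_config reads the saved ftl.json on stdin (the redirection is not visible in the log) and using rpc_get_methods as a stand-in readiness probe for the autotest waitforlisten helper:

# Sketch of the restart around trim.sh@92-@96 (the readiness loop is a
# stand-in for the waitforlisten helper in test/common/autotest_common.sh).
SPDK_REPO=/home/vagrant/spdk_repo/spdk

"$SPDK_REPO/build/bin/spdk_tgt" -L ftl_init &
svcpid=$!

# Wait until the target answers RPCs on the default UNIX domain socket.
until "$SPDK_REPO/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

# Replay the saved configuration; this re-creates the base/cache bdevs and
# the ftl0 device, producing the 'FTL startup' sequence seen in the log.
"$SPDK_REPO/scripts/rpc.py" load_config < "$SPDK_REPO/test/ftl/config/ftl.json"
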
00:58:24.704 [2024-12-09 10:09:31.556954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79173 ] 00:58:24.963 [2024-12-09 10:09:31.750432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:24.963 [2024-12-09 10:09:31.883412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:25.900 10:09:32 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:25.900 10:09:32 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:58:25.900 10:09:32 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:58:26.158 [2024-12-09 10:09:33.146103] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:58:26.158 [2024-12-09 10:09:33.146187] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:58:26.418 [2024-12-09 10:09:33.336498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.418 [2024-12-09 10:09:33.336588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:58:26.418 [2024-12-09 10:09:33.336644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:58:26.418 [2024-12-09 10:09:33.336689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.418 [2024-12-09 10:09:33.340707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.418 [2024-12-09 10:09:33.340749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:26.418 [2024-12-09 10:09:33.340784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.989 ms 00:58:26.418 [2024-12-09 10:09:33.340796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.418 [2024-12-09 10:09:33.340940] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:58:26.418 [2024-12-09 10:09:33.341965] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:58:26.418 [2024-12-09 10:09:33.342012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.418 [2024-12-09 10:09:33.342028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:26.418 [2024-12-09 10:09:33.342043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:58:26.418 [2024-12-09 10:09:33.342055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.418 [2024-12-09 10:09:33.344150] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:58:26.418 [2024-12-09 10:09:33.361583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.418 [2024-12-09 10:09:33.361636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:58:26.418 [2024-12-09 10:09:33.361656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.438 ms 00:58:26.418 [2024-12-09 10:09:33.361672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.418 [2024-12-09 10:09:33.361791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.418 [2024-12-09 10:09:33.361817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:58:26.418 [2024-12-09 10:09:33.361832] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:58:26.418 [2024-12-09 10:09:33.361859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.418 [2024-12-09 10:09:33.371062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.418 [2024-12-09 10:09:33.371283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:26.418 [2024-12-09 10:09:33.371313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.107 ms 00:58:26.418 [2024-12-09 10:09:33.371329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.418 [2024-12-09 10:09:33.371490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.418 [2024-12-09 10:09:33.371520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:26.418 [2024-12-09 10:09:33.371540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:58:26.418 [2024-12-09 10:09:33.371572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.418 [2024-12-09 10:09:33.371616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.418 [2024-12-09 10:09:33.371636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:58:26.418 [2024-12-09 10:09:33.371651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:58:26.418 [2024-12-09 10:09:33.371665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.418 [2024-12-09 10:09:33.371704] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:58:26.418 [2024-12-09 10:09:33.377200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.418 [2024-12-09 10:09:33.377235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:26.418 [2024-12-09 10:09:33.377300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.504 ms 00:58:26.418 [2024-12-09 10:09:33.377314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.418 [2024-12-09 10:09:33.377407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.418 [2024-12-09 10:09:33.377426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:58:26.418 [2024-12-09 10:09:33.377441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:58:26.418 [2024-12-09 10:09:33.377456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.418 [2024-12-09 10:09:33.377490] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:58:26.418 [2024-12-09 10:09:33.377518] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:58:26.418 [2024-12-09 10:09:33.377605] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:58:26.418 [2024-12-09 10:09:33.377637] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:58:26.418 [2024-12-09 10:09:33.377779] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:58:26.418 [2024-12-09 10:09:33.377798] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:58:26.418 [2024-12-09 10:09:33.377821] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:58:26.418 [2024-12-09 10:09:33.377836] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:58:26.418 [2024-12-09 10:09:33.377872] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:58:26.418 [2024-12-09 10:09:33.377887] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:58:26.418 [2024-12-09 10:09:33.377900] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:58:26.418 [2024-12-09 10:09:33.377913] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:58:26.418 [2024-12-09 10:09:33.377930] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:58:26.418 [2024-12-09 10:09:33.377943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.418 [2024-12-09 10:09:33.377958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:58:26.418 [2024-12-09 10:09:33.377971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:58:26.419 [2024-12-09 10:09:33.377985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.419 [2024-12-09 10:09:33.378088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.419 [2024-12-09 10:09:33.378107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:58:26.419 [2024-12-09 10:09:33.378121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:58:26.419 [2024-12-09 10:09:33.378137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.419 [2024-12-09 10:09:33.378276] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:58:26.419 [2024-12-09 10:09:33.378312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:58:26.419 [2024-12-09 10:09:33.378325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:58:26.419 [2024-12-09 10:09:33.378339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:26.419 [2024-12-09 10:09:33.378351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:58:26.419 [2024-12-09 10:09:33.378377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:58:26.419 [2024-12-09 10:09:33.378388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:58:26.419 [2024-12-09 10:09:33.378404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:58:26.419 [2024-12-09 10:09:33.378414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:58:26.419 [2024-12-09 10:09:33.378426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:58:26.419 [2024-12-09 10:09:33.378437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:58:26.419 [2024-12-09 10:09:33.378449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:58:26.419 [2024-12-09 10:09:33.378458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:58:26.419 [2024-12-09 10:09:33.378470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:58:26.419 [2024-12-09 10:09:33.378481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:58:26.419 [2024-12-09 10:09:33.378493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:26.419 
[2024-12-09 10:09:33.378503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:58:26.419 [2024-12-09 10:09:33.378516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:58:26.419 [2024-12-09 10:09:33.378552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:26.419 [2024-12-09 10:09:33.378584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:58:26.419 [2024-12-09 10:09:33.378596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:58:26.419 [2024-12-09 10:09:33.378610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:26.419 [2024-12-09 10:09:33.378638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:58:26.419 [2024-12-09 10:09:33.378654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:58:26.419 [2024-12-09 10:09:33.378666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:26.419 [2024-12-09 10:09:33.378680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:58:26.419 [2024-12-09 10:09:33.378691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:58:26.419 [2024-12-09 10:09:33.378705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:26.419 [2024-12-09 10:09:33.378716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:58:26.419 [2024-12-09 10:09:33.378732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:58:26.419 [2024-12-09 10:09:33.378743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:26.419 [2024-12-09 10:09:33.378757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:58:26.419 [2024-12-09 10:09:33.378768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:58:26.419 [2024-12-09 10:09:33.378782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:58:26.419 [2024-12-09 10:09:33.378794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:58:26.419 [2024-12-09 10:09:33.378808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:58:26.419 [2024-12-09 10:09:33.378819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:58:26.419 [2024-12-09 10:09:33.378833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:58:26.419 [2024-12-09 10:09:33.378845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:58:26.419 [2024-12-09 10:09:33.378860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:26.419 [2024-12-09 10:09:33.378872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:58:26.419 [2024-12-09 10:09:33.378885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:58:26.419 [2024-12-09 10:09:33.378912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:26.419 [2024-12-09 10:09:33.378940] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:58:26.419 [2024-12-09 10:09:33.378970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:58:26.419 [2024-12-09 10:09:33.378983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:58:26.419 [2024-12-09 10:09:33.378993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:26.419 [2024-12-09 10:09:33.379007] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:58:26.419 [2024-12-09 10:09:33.379018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:58:26.419 [2024-12-09 10:09:33.379030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:58:26.419 [2024-12-09 10:09:33.379057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:58:26.419 [2024-12-09 10:09:33.379072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:58:26.419 [2024-12-09 10:09:33.379083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:58:26.419 [2024-12-09 10:09:33.379098] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:58:26.419 [2024-12-09 10:09:33.379112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:26.419 [2024-12-09 10:09:33.379130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:58:26.419 [2024-12-09 10:09:33.379142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:58:26.419 [2024-12-09 10:09:33.379172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:58:26.419 [2024-12-09 10:09:33.379200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:58:26.419 [2024-12-09 10:09:33.379215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:58:26.419 [2024-12-09 10:09:33.379228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:58:26.419 [2024-12-09 10:09:33.379243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:58:26.419 [2024-12-09 10:09:33.379270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:58:26.419 [2024-12-09 10:09:33.379285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:58:26.419 [2024-12-09 10:09:33.379298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:58:26.419 [2024-12-09 10:09:33.379313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:58:26.419 [2024-12-09 10:09:33.379325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:58:26.419 [2024-12-09 10:09:33.379340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:58:26.419 [2024-12-09 10:09:33.379353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:58:26.419 [2024-12-09 10:09:33.379398] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:58:26.419 [2024-12-09 
10:09:33.379425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:26.419 [2024-12-09 10:09:33.379460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:58:26.419 [2024-12-09 10:09:33.379472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:58:26.419 [2024-12-09 10:09:33.379501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:58:26.419 [2024-12-09 10:09:33.379529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:58:26.419 [2024-12-09 10:09:33.379560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.419 [2024-12-09 10:09:33.379572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:58:26.419 [2024-12-09 10:09:33.379588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:58:26.419 [2024-12-09 10:09:33.379602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.419 [2024-12-09 10:09:33.423782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.419 [2024-12-09 10:09:33.423867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:26.419 [2024-12-09 10:09:33.423909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.090 ms 00:58:26.419 [2024-12-09 10:09:33.423926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.419 [2024-12-09 10:09:33.424121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.419 [2024-12-09 10:09:33.424142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:58:26.419 [2024-12-09 10:09:33.424158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:58:26.419 [2024-12-09 10:09:33.424172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.472510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.472579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:26.679 [2024-12-09 10:09:33.472623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.296 ms 00:58:26.679 [2024-12-09 10:09:33.472637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.472835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.472888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:26.679 [2024-12-09 10:09:33.472910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:58:26.679 [2024-12-09 10:09:33.472924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.473620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.473652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:26.679 [2024-12-09 10:09:33.473674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:58:26.679 [2024-12-09 10:09:33.473687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.473939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.473977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:26.679 [2024-12-09 10:09:33.473999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:58:26.679 [2024-12-09 10:09:33.474012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.497175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.497239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:26.679 [2024-12-09 10:09:33.497342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.125 ms 00:58:26.679 [2024-12-09 10:09:33.497356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.527254] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:58:26.679 [2024-12-09 10:09:33.527724] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:58:26.679 [2024-12-09 10:09:33.527760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.527776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:58:26.679 [2024-12-09 10:09:33.527800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.211 ms 00:58:26.679 [2024-12-09 10:09:33.527827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.561032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.561115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:58:26.679 [2024-12-09 10:09:33.561157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.031 ms 00:58:26.679 [2024-12-09 10:09:33.561171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.579507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.579566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:58:26.679 [2024-12-09 10:09:33.579591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.140 ms 00:58:26.679 [2024-12-09 10:09:33.579604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.594157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.594212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:58:26.679 [2024-12-09 10:09:33.594247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.425 ms 00:58:26.679 [2024-12-09 10:09:33.594258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.595282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.595350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:58:26.679 [2024-12-09 10:09:33.595404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:58:26.679 [2024-12-09 10:09:33.595416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 
10:09:33.676504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.676570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:58:26.679 [2024-12-09 10:09:33.676609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.034 ms 00:58:26.679 [2024-12-09 10:09:33.676622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.689210] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:58:26.679 [2024-12-09 10:09:33.709971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.710076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:58:26.679 [2024-12-09 10:09:33.710101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.197 ms 00:58:26.679 [2024-12-09 10:09:33.710117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.710365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.710390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:58:26.679 [2024-12-09 10:09:33.710405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:58:26.679 [2024-12-09 10:09:33.710420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.710497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.710518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:58:26.679 [2024-12-09 10:09:33.710531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:58:26.679 [2024-12-09 10:09:33.710549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.710582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.710603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:58:26.679 [2024-12-09 10:09:33.710617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:58:26.679 [2024-12-09 10:09:33.710631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.679 [2024-12-09 10:09:33.710676] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:58:26.679 [2024-12-09 10:09:33.710698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.679 [2024-12-09 10:09:33.710745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:58:26.679 [2024-12-09 10:09:33.710775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:58:26.679 [2024-12-09 10:09:33.710804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.939 [2024-12-09 10:09:33.740390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.939 [2024-12-09 10:09:33.740441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:58:26.939 [2024-12-09 10:09:33.740463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.544 ms 00:58:26.939 [2024-12-09 10:09:33.740477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.939 [2024-12-09 10:09:33.740632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:26.939 [2024-12-09 10:09:33.740653] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:58:26.939 [2024-12-09 10:09:33.740671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:58:26.939 [2024-12-09 10:09:33.740687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:26.939 [2024-12-09 10:09:33.742015] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:58:26.939 [2024-12-09 10:09:33.746113] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.180 ms, result 0 00:58:26.939 [2024-12-09 10:09:33.747521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:58:26.939 Some configs were skipped because the RPC state that can call them passed over. 00:58:26.939 10:09:33 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:58:27.197 [2024-12-09 10:09:34.085593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:27.197 [2024-12-09 10:09:34.085856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:58:27.197 [2024-12-09 10:09:34.086004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.894 ms 00:58:27.197 [2024-12-09 10:09:34.086062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:27.197 [2024-12-09 10:09:34.086229] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.540 ms, result 0 00:58:27.197 true 00:58:27.197 10:09:34 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:58:27.456 [2024-12-09 10:09:34.329202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:27.456 [2024-12-09 10:09:34.329444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:58:27.456 [2024-12-09 10:09:34.329575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:58:27.456 [2024-12-09 10:09:34.329627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:27.456 [2024-12-09 10:09:34.329814] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.859 ms, result 0 00:58:27.456 true 00:58:27.456 10:09:34 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79173 00:58:27.456 10:09:34 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79173 ']' 00:58:27.456 10:09:34 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79173 00:58:27.456 10:09:34 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:58:27.456 10:09:34 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:27.456 10:09:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79173 00:58:27.456 killing process with pid 79173 00:58:27.456 10:09:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:27.456 10:09:34 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:27.456 10:09:34 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79173' 00:58:27.456 10:09:34 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79173 00:58:27.456 10:09:34 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79173 00:58:28.390 [2024-12-09 10:09:35.373780] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.390 [2024-12-09 10:09:35.373923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:58:28.390 [2024-12-09 10:09:35.373947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:58:28.390 [2024-12-09 10:09:35.373963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.390 [2024-12-09 10:09:35.374000] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:58:28.390 [2024-12-09 10:09:35.377691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.390 [2024-12-09 10:09:35.377722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:58:28.390 [2024-12-09 10:09:35.377753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.664 ms 00:58:28.390 [2024-12-09 10:09:35.377765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.390 [2024-12-09 10:09:35.378126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.390 [2024-12-09 10:09:35.378147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:58:28.390 [2024-12-09 10:09:35.378163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:58:28.390 [2024-12-09 10:09:35.378191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.390 [2024-12-09 10:09:35.382512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.390 [2024-12-09 10:09:35.382556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:58:28.390 [2024-12-09 10:09:35.382579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.277 ms 00:58:28.390 [2024-12-09 10:09:35.382592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.390 [2024-12-09 10:09:35.389886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.390 [2024-12-09 10:09:35.389925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:58:28.390 [2024-12-09 10:09:35.389947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.244 ms 00:58:28.390 [2024-12-09 10:09:35.389960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.390 [2024-12-09 10:09:35.402816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.390 [2024-12-09 10:09:35.402870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:58:28.390 [2024-12-09 10:09:35.402894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.771 ms 00:58:28.390 [2024-12-09 10:09:35.402906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.390 [2024-12-09 10:09:35.411879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.390 [2024-12-09 10:09:35.411957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:58:28.390 [2024-12-09 10:09:35.411991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.872 ms 00:58:28.390 [2024-12-09 10:09:35.412003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.390 [2024-12-09 10:09:35.412165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.390 [2024-12-09 10:09:35.412184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:58:28.390 [2024-12-09 10:09:35.412199] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:58:28.390 [2024-12-09 10:09:35.412210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.390 [2024-12-09 10:09:35.425403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.390 [2024-12-09 10:09:35.425442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:58:28.390 [2024-12-09 10:09:35.425478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.163 ms 00:58:28.390 [2024-12-09 10:09:35.425489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.650 [2024-12-09 10:09:35.437634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.650 [2024-12-09 10:09:35.437671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:58:28.650 [2024-12-09 10:09:35.437711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.061 ms 00:58:28.650 [2024-12-09 10:09:35.437723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.650 [2024-12-09 10:09:35.449628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.650 [2024-12-09 10:09:35.449682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:58:28.650 [2024-12-09 10:09:35.449716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.832 ms 00:58:28.650 [2024-12-09 10:09:35.449727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.650 [2024-12-09 10:09:35.461207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.650 [2024-12-09 10:09:35.461245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:58:28.650 [2024-12-09 10:09:35.461313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.385 ms 00:58:28.650 [2024-12-09 10:09:35.461326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.650 [2024-12-09 10:09:35.461408] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:58:28.650 [2024-12-09 10:09:35.461434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:58:28.650 [2024-12-09 10:09:35.461452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:58:28.650 [2024-12-09 10:09:35.461465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:58:28.650 [2024-12-09 10:09:35.461480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:58:28.650 [2024-12-09 10:09:35.461494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:58:28.650 [2024-12-09 10:09:35.461512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:58:28.650 [2024-12-09 10:09:35.461524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:58:28.650 [2024-12-09 10:09:35.461539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:58:28.650 [2024-12-09 10:09:35.461552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:58:28.650 [2024-12-09 10:09:35.461566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:58:28.650 [2024-12-09 
10:09:35.461594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:58:28.651 [2024-12-09 10:09:35.461984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.461997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.462997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:58:28.651 [2024-12-09 10:09:35.463032] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:58:28.651 [2024-12-09 10:09:35.463054] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68115f76-6e25-4e17-a078-1a730c2e63d7 00:58:28.652 [2024-12-09 10:09:35.463070] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:58:28.652 [2024-12-09 10:09:35.463085] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:58:28.652 [2024-12-09 10:09:35.463096] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:58:28.652 [2024-12-09 10:09:35.463111] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:58:28.652 [2024-12-09 10:09:35.463123] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:58:28.652 [2024-12-09 10:09:35.463137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:58:28.652 [2024-12-09 10:09:35.463149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:58:28.652 [2024-12-09 10:09:35.463162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:58:28.652 [2024-12-09 10:09:35.463173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:58:28.652 [2024-12-09 10:09:35.463187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:58:28.652 [2024-12-09 10:09:35.463214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:58:28.652 [2024-12-09 10:09:35.463230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.783 ms 00:58:28.652 [2024-12-09 10:09:35.463241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.652 [2024-12-09 10:09:35.479746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.652 [2024-12-09 10:09:35.479787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:58:28.652 [2024-12-09 10:09:35.479826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.422 ms 00:58:28.652 [2024-12-09 10:09:35.479838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.652 [2024-12-09 10:09:35.480381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:28.652 [2024-12-09 10:09:35.480437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:58:28.652 [2024-12-09 10:09:35.480459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:58:28.652 [2024-12-09 10:09:35.480472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.652 [2024-12-09 10:09:35.537909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:28.652 [2024-12-09 10:09:35.537967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:28.652 [2024-12-09 10:09:35.537991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:28.652 [2024-12-09 10:09:35.538004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.652 [2024-12-09 10:09:35.538173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:28.652 [2024-12-09 10:09:35.538208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:28.652 [2024-12-09 10:09:35.538228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:28.652 [2024-12-09 10:09:35.538240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.652 [2024-12-09 10:09:35.538355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:28.652 [2024-12-09 10:09:35.538387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:28.652 [2024-12-09 10:09:35.538419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:28.652 [2024-12-09 10:09:35.538431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.652 [2024-12-09 10:09:35.538463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:28.652 [2024-12-09 10:09:35.538479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:28.652 [2024-12-09 10:09:35.538494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:28.652 [2024-12-09 10:09:35.538509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.652 [2024-12-09 10:09:35.640684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:28.652 [2024-12-09 10:09:35.640911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:28.652 [2024-12-09 10:09:35.640948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:28.652 [2024-12-09 10:09:35.640963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.911 [2024-12-09 
10:09:35.733782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:28.911 [2024-12-09 10:09:35.733863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:28.911 [2024-12-09 10:09:35.733889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:28.911 [2024-12-09 10:09:35.733906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.911 [2024-12-09 10:09:35.734031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:28.911 [2024-12-09 10:09:35.734051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:28.911 [2024-12-09 10:09:35.734072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:28.911 [2024-12-09 10:09:35.734085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.911 [2024-12-09 10:09:35.734137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:28.911 [2024-12-09 10:09:35.734153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:28.911 [2024-12-09 10:09:35.734168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:28.911 [2024-12-09 10:09:35.734181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.911 [2024-12-09 10:09:35.734349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:28.911 [2024-12-09 10:09:35.734370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:28.911 [2024-12-09 10:09:35.734387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:28.911 [2024-12-09 10:09:35.734399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.911 [2024-12-09 10:09:35.734463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:28.911 [2024-12-09 10:09:35.734482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:58:28.911 [2024-12-09 10:09:35.734498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:28.911 [2024-12-09 10:09:35.734511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.911 [2024-12-09 10:09:35.734569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:28.911 [2024-12-09 10:09:35.734585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:28.911 [2024-12-09 10:09:35.734604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:28.911 [2024-12-09 10:09:35.734617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.911 [2024-12-09 10:09:35.734680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:28.911 [2024-12-09 10:09:35.734747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:28.911 [2024-12-09 10:09:35.734770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:28.911 [2024-12-09 10:09:35.734793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:28.911 [2024-12-09 10:09:35.734971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 361.161 ms, result 0 00:58:29.846 10:09:36 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:58:29.846 [2024-12-09 10:09:36.783323] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:58:29.846 [2024-12-09 10:09:36.783546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79237 ] 00:58:30.105 [2024-12-09 10:09:36.965172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:30.105 [2024-12-09 10:09:37.088134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:30.674 [2024-12-09 10:09:37.444148] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:58:30.674 [2024-12-09 10:09:37.444252] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:58:30.674 [2024-12-09 10:09:37.607640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.674 [2024-12-09 10:09:37.607713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:58:30.674 [2024-12-09 10:09:37.607735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:58:30.674 [2024-12-09 10:09:37.607747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.674 [2024-12-09 10:09:37.611353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.674 [2024-12-09 10:09:37.611402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:30.674 [2024-12-09 10:09:37.611420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.577 ms 00:58:30.674 [2024-12-09 10:09:37.611432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.674 [2024-12-09 10:09:37.611578] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:58:30.674 [2024-12-09 10:09:37.612515] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:58:30.674 [2024-12-09 10:09:37.612693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.674 [2024-12-09 10:09:37.612714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:30.674 [2024-12-09 10:09:37.612728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:58:30.674 [2024-12-09 10:09:37.612741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.674 [2024-12-09 10:09:37.614823] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:58:30.674 [2024-12-09 10:09:37.632187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.674 [2024-12-09 10:09:37.632245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:58:30.674 [2024-12-09 10:09:37.632285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.366 ms 00:58:30.674 [2024-12-09 10:09:37.632298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.674 [2024-12-09 10:09:37.632431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.674 [2024-12-09 10:09:37.632453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:58:30.674 [2024-12-09 10:09:37.632467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:58:30.674 [2024-12-09 
10:09:37.632479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.674 [2024-12-09 10:09:37.641337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.674 [2024-12-09 10:09:37.641396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:30.674 [2024-12-09 10:09:37.641415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.790 ms 00:58:30.674 [2024-12-09 10:09:37.641427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.674 [2024-12-09 10:09:37.641587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.674 [2024-12-09 10:09:37.641609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:30.674 [2024-12-09 10:09:37.641623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:58:30.674 [2024-12-09 10:09:37.641635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.674 [2024-12-09 10:09:37.641685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.674 [2024-12-09 10:09:37.641701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:58:30.674 [2024-12-09 10:09:37.641714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:58:30.674 [2024-12-09 10:09:37.641726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.674 [2024-12-09 10:09:37.641761] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:58:30.674 [2024-12-09 10:09:37.647081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.674 [2024-12-09 10:09:37.647140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:30.674 [2024-12-09 10:09:37.647158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.331 ms 00:58:30.674 [2024-12-09 10:09:37.647185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.674 [2024-12-09 10:09:37.647318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.674 [2024-12-09 10:09:37.647338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:58:30.674 [2024-12-09 10:09:37.647352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:58:30.674 [2024-12-09 10:09:37.647363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.674 [2024-12-09 10:09:37.647402] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:58:30.674 [2024-12-09 10:09:37.647434] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:58:30.674 [2024-12-09 10:09:37.647477] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:58:30.674 [2024-12-09 10:09:37.647498] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:58:30.674 [2024-12-09 10:09:37.647608] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:58:30.674 [2024-12-09 10:09:37.647624] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:58:30.674 [2024-12-09 10:09:37.647639] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:58:30.674 [2024-12-09 10:09:37.647659] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:58:30.674 [2024-12-09 10:09:37.647673] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:58:30.674 [2024-12-09 10:09:37.647686] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:58:30.674 [2024-12-09 10:09:37.647697] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:58:30.674 [2024-12-09 10:09:37.647709] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:58:30.674 [2024-12-09 10:09:37.647720] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:58:30.674 [2024-12-09 10:09:37.647733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.674 [2024-12-09 10:09:37.647744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:58:30.674 [2024-12-09 10:09:37.647756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:58:30.674 [2024-12-09 10:09:37.647768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.674 [2024-12-09 10:09:37.647868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.674 [2024-12-09 10:09:37.647890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:58:30.674 [2024-12-09 10:09:37.647902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:58:30.674 [2024-12-09 10:09:37.647914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.674 [2024-12-09 10:09:37.648035] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:58:30.674 [2024-12-09 10:09:37.648054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:58:30.674 [2024-12-09 10:09:37.648067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:58:30.674 [2024-12-09 10:09:37.648079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:30.674 [2024-12-09 10:09:37.648092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:58:30.674 [2024-12-09 10:09:37.648103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:58:30.674 [2024-12-09 10:09:37.648114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:58:30.674 [2024-12-09 10:09:37.648126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:58:30.674 [2024-12-09 10:09:37.648138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:58:30.674 [2024-12-09 10:09:37.648148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:58:30.674 [2024-12-09 10:09:37.648159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:58:30.674 [2024-12-09 10:09:37.648184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:58:30.674 [2024-12-09 10:09:37.648195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:58:30.674 [2024-12-09 10:09:37.648206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:58:30.674 [2024-12-09 10:09:37.648217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:58:30.674 [2024-12-09 10:09:37.648228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:30.674 [2024-12-09 10:09:37.648240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:58:30.674 [2024-12-09 10:09:37.648267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:58:30.674 [2024-12-09 10:09:37.648280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:30.674 [2024-12-09 10:09:37.648291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:58:30.674 [2024-12-09 10:09:37.648303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:58:30.674 [2024-12-09 10:09:37.648314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:30.674 [2024-12-09 10:09:37.648325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:58:30.674 [2024-12-09 10:09:37.648335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:58:30.674 [2024-12-09 10:09:37.648346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:30.674 [2024-12-09 10:09:37.648356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:58:30.674 [2024-12-09 10:09:37.648367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:58:30.674 [2024-12-09 10:09:37.648377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:30.674 [2024-12-09 10:09:37.648388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:58:30.674 [2024-12-09 10:09:37.648399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:58:30.674 [2024-12-09 10:09:37.648409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:30.674 [2024-12-09 10:09:37.648419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:58:30.674 [2024-12-09 10:09:37.648431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:58:30.674 [2024-12-09 10:09:37.648442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:58:30.674 [2024-12-09 10:09:37.648453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:58:30.675 [2024-12-09 10:09:37.648464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:58:30.675 [2024-12-09 10:09:37.648474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:58:30.675 [2024-12-09 10:09:37.648485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:58:30.675 [2024-12-09 10:09:37.648495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:58:30.675 [2024-12-09 10:09:37.648506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:30.675 [2024-12-09 10:09:37.648516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:58:30.675 [2024-12-09 10:09:37.648527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:58:30.675 [2024-12-09 10:09:37.648539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:30.675 [2024-12-09 10:09:37.648550] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:58:30.675 [2024-12-09 10:09:37.648562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:58:30.675 [2024-12-09 10:09:37.648579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:58:30.675 [2024-12-09 10:09:37.648591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:30.675 [2024-12-09 10:09:37.648603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:58:30.675 [2024-12-09 10:09:37.648615] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:58:30.675 [2024-12-09 10:09:37.648626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:58:30.675 [2024-12-09 10:09:37.648637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:58:30.675 [2024-12-09 10:09:37.648647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:58:30.675 [2024-12-09 10:09:37.648658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:58:30.675 [2024-12-09 10:09:37.648671] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:58:30.675 [2024-12-09 10:09:37.648686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:30.675 [2024-12-09 10:09:37.648699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:58:30.675 [2024-12-09 10:09:37.648711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:58:30.675 [2024-12-09 10:09:37.648723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:58:30.675 [2024-12-09 10:09:37.648736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:58:30.675 [2024-12-09 10:09:37.648747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:58:30.675 [2024-12-09 10:09:37.648759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:58:30.675 [2024-12-09 10:09:37.648771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:58:30.675 [2024-12-09 10:09:37.648783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:58:30.675 [2024-12-09 10:09:37.648795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:58:30.675 [2024-12-09 10:09:37.648807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:58:30.675 [2024-12-09 10:09:37.648819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:58:30.675 [2024-12-09 10:09:37.648831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:58:30.675 [2024-12-09 10:09:37.648843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:58:30.675 [2024-12-09 10:09:37.648856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:58:30.675 [2024-12-09 10:09:37.648868] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:58:30.675 [2024-12-09 10:09:37.648881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:30.675 [2024-12-09 10:09:37.648895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:58:30.675 [2024-12-09 10:09:37.648908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:58:30.675 [2024-12-09 10:09:37.648920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:58:30.675 [2024-12-09 10:09:37.648933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:58:30.675 [2024-12-09 10:09:37.648946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.675 [2024-12-09 10:09:37.648963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:58:30.675 [2024-12-09 10:09:37.648976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:58:30.675 [2024-12-09 10:09:37.648987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.675 [2024-12-09 10:09:37.690130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.675 [2024-12-09 10:09:37.690477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:30.675 [2024-12-09 10:09:37.690619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.058 ms 00:58:30.675 [2024-12-09 10:09:37.690679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.675 [2024-12-09 10:09:37.691115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.675 [2024-12-09 10:09:37.691272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:58:30.675 [2024-12-09 10:09:37.691398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:58:30.675 [2024-12-09 10:09:37.691545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.939 [2024-12-09 10:09:37.752676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.939 [2024-12-09 10:09:37.752932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:30.939 [2024-12-09 10:09:37.753055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.035 ms 00:58:30.939 [2024-12-09 10:09:37.753112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.939 [2024-12-09 10:09:37.753418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.939 [2024-12-09 10:09:37.753537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:30.939 [2024-12-09 10:09:37.753639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:58:30.939 [2024-12-09 10:09:37.753777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.939 [2024-12-09 10:09:37.754460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.939 [2024-12-09 10:09:37.754598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:30.939 [2024-12-09 10:09:37.754715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:58:30.939 [2024-12-09 10:09:37.754814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.939 [2024-12-09 10:09:37.755033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:58:30.939 [2024-12-09 10:09:37.755103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:30.939 [2024-12-09 10:09:37.755268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:58:30.939 [2024-12-09 10:09:37.755322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.939 [2024-12-09 10:09:37.775763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.939 [2024-12-09 10:09:37.775982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:30.940 [2024-12-09 10:09:37.776122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.326 ms 00:58:30.940 [2024-12-09 10:09:37.776171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.940 [2024-12-09 10:09:37.793617] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:58:30.940 [2024-12-09 10:09:37.793873] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:58:30.940 [2024-12-09 10:09:37.794099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.940 [2024-12-09 10:09:37.794213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:58:30.940 [2024-12-09 10:09:37.794240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.612 ms 00:58:30.940 [2024-12-09 10:09:37.794277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.940 [2024-12-09 10:09:37.824393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.940 [2024-12-09 10:09:37.824474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:58:30.940 [2024-12-09 10:09:37.824496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.962 ms 00:58:30.940 [2024-12-09 10:09:37.824510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.940 [2024-12-09 10:09:37.841418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.940 [2024-12-09 10:09:37.841493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:58:30.940 [2024-12-09 10:09:37.841514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.744 ms 00:58:30.940 [2024-12-09 10:09:37.841526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.940 [2024-12-09 10:09:37.858361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.940 [2024-12-09 10:09:37.858482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:58:30.940 [2024-12-09 10:09:37.858532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.680 ms 00:58:30.940 [2024-12-09 10:09:37.858557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.940 [2024-12-09 10:09:37.859785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.940 [2024-12-09 10:09:37.859980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:58:30.940 [2024-12-09 10:09:37.860014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:58:30.940 [2024-12-09 10:09:37.860037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.940 [2024-12-09 10:09:37.946204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:30.940 [2024-12-09 
10:09:37.946310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:58:30.940 [2024-12-09 10:09:37.946335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.113 ms 00:58:30.940 [2024-12-09 10:09:37.946349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:30.940 [2024-12-09 10:09:37.961865] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:58:31.219 [2024-12-09 10:09:37.985882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:31.219 [2024-12-09 10:09:37.986015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:58:31.219 [2024-12-09 10:09:37.986063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.318 ms 00:58:31.219 [2024-12-09 10:09:37.986083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:31.219 [2024-12-09 10:09:37.986379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:31.219 [2024-12-09 10:09:37.986403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:58:31.219 [2024-12-09 10:09:37.986418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:58:31.219 [2024-12-09 10:09:37.986431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:31.219 [2024-12-09 10:09:37.986521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:31.219 [2024-12-09 10:09:37.986538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:58:31.219 [2024-12-09 10:09:37.986557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:58:31.219 [2024-12-09 10:09:37.986572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:31.219 [2024-12-09 10:09:37.986618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:31.219 [2024-12-09 10:09:37.986636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:58:31.219 [2024-12-09 10:09:37.986658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:58:31.219 [2024-12-09 10:09:37.986670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:31.219 [2024-12-09 10:09:37.986718] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:58:31.219 [2024-12-09 10:09:37.986735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:31.219 [2024-12-09 10:09:37.986747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:58:31.219 [2024-12-09 10:09:37.986760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:58:31.219 [2024-12-09 10:09:37.986771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:31.219 [2024-12-09 10:09:38.020377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:31.219 [2024-12-09 10:09:38.020453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:58:31.219 [2024-12-09 10:09:38.020486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.568 ms 00:58:31.219 [2024-12-09 10:09:38.020499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:31.219 [2024-12-09 10:09:38.020689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:31.219 [2024-12-09 10:09:38.020710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:58:31.219 [2024-12-09 
10:09:38.020724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
00:58:31.219 [2024-12-09 10:09:38.020747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:58:31.219 [2024-12-09 10:09:38.022000] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:58:31.219 [2024-12-09 10:09:38.026711] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 413.976 ms, result 0
00:58:31.219 [2024-12-09 10:09:38.027652] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:58:31.219 [2024-12-09 10:09:38.044086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:58:32.156  [2024-12-09T10:09:40.136Z] Copying: 25/256 [MB] (25 MBps) [2024-12-09T10:09:41.512Z] Copying: 49/256 [MB] (23 MBps) [2024-12-09T10:09:42.449Z] Copying: 72/256 [MB] (22 MBps) [2024-12-09T10:09:43.387Z] Copying: 94/256 [MB] (22 MBps) [2024-12-09T10:09:44.352Z] Copying: 116/256 [MB] (21 MBps) [2024-12-09T10:09:45.290Z] Copying: 138/256 [MB] (21 MBps) [2024-12-09T10:09:46.229Z] Copying: 160/256 [MB] (22 MBps) [2024-12-09T10:09:47.164Z] Copying: 183/256 [MB] (22 MBps) [2024-12-09T10:09:48.540Z] Copying: 207/256 [MB] (23 MBps) [2024-12-09T10:09:49.107Z] Copying: 230/256 [MB] (23 MBps) [2024-12-09T10:09:49.366Z] Copying: 253/256 [MB] (23 MBps) [2024-12-09T10:09:49.625Z] Copying: 256/256 [MB] (average 23 MBps)[2024-12-09 10:09:49.516183] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:58:42.581 [2024-12-09 10:09:49.531692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:58:42.582 [2024-12-09 10:09:49.531905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:58:42.582 [2024-12-09 10:09:49.531948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:58:42.582 [2024-12-09 10:09:49.531962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:58:42.582 [2024-12-09 10:09:49.532003] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:58:42.582 [2024-12-09 10:09:49.535727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:58:42.582 [2024-12-09 10:09:49.535763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:58:42.582 [2024-12-09 10:09:49.535779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms
00:58:42.582 [2024-12-09 10:09:49.535791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:58:42.582 [2024-12-09 10:09:49.536096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:58:42.582 [2024-12-09 10:09:49.536114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:58:42.582 [2024-12-09 10:09:49.536128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms
00:58:42.582 [2024-12-09 10:09:49.536139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:58:42.582 [2024-12-09 10:09:49.540292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:58:42.582 [2024-12-09 10:09:49.540323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:58:42.582 [2024-12-09 10:09:49.540353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.125 ms
00:58:42.582 [2024-12-09 10:09:49.540364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:58:42.582 [2024-12-09 10:09:49.547624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:58:42.582 [2024-12-09 10:09:49.547657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:58:42.582 [2024-12-09 10:09:49.547688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.235 ms
00:58:42.582 [2024-12-09 10:09:49.547699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:58:42.582 [2024-12-09 10:09:49.576637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:58:42.582 [2024-12-09 10:09:49.576680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:58:42.582 [2024-12-09 10:09:49.576713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.844 ms
00:58:42.582 [2024-12-09 10:09:49.576724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:58:42.582 [2024-12-09 10:09:49.593907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:58:42.582 [2024-12-09 10:09:49.593964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:58:42.582 [2024-12-09 10:09:49.593999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.122 ms
00:58:42.582 [2024-12-09 10:09:49.594010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:58:42.582 [2024-12-09 10:09:49.594156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:58:42.582 [2024-12-09 10:09:49.594178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:58:42.582 [2024-12-09 10:09:49.594220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms
00:58:42.582 [2024-12-09 10:09:49.594232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:58:42.842 [2024-12-09 10:09:49.625500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:58:42.842 [2024-12-09 10:09:49.625549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:58:42.842 [2024-12-09 10:09:49.625567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.243 ms
00:58:42.842 [2024-12-09 10:09:49.625579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:58:42.842 [2024-12-09 10:09:49.656106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:58:42.842 [2024-12-09 10:09:49.656148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:58:42.842 [2024-12-09 10:09:49.656182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.424 ms
00:58:42.842 [2024-12-09 10:09:49.656192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:58:42.842 [2024-12-09 10:09:49.685009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:58:42.842 [2024-12-09 10:09:49.685225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:58:42.842 [2024-12-09 10:09:49.685274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.724 ms
00:58:42.842 [2024-12-09 10:09:49.685290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:58:42.842 [2024-12-09 10:09:49.714275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:58:42.842 [2024-12-09 10:09:49.714343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:58:42.842 [2024-12-09 10:09:49.714392] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.881 ms 00:58:42.842 [2024-12-09 10:09:49.714402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:42.842 [2024-12-09 10:09:49.714465] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:58:42.842 [2024-12-09 10:09:49.714489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:58:42.842 [2024-12-09 10:09:49.714504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:58:42.842 [2024-12-09 10:09:49.714516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:58:42.842 [2024-12-09 10:09:49.714528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:58:42.842 [2024-12-09 10:09:49.714539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:58:42.842 [2024-12-09 10:09:49.714551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:58:42.842 [2024-12-09 10:09:49.714563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:58:42.842 [2024-12-09 10:09:49.714575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:58:42.842 [2024-12-09 10:09:49.714586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:58:42.842 [2024-12-09 10:09:49.714598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:58:42.842 [2024-12-09 10:09:49.714610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:58:42.842 [2024-12-09 10:09:49.714622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:58:42.842 [2024-12-09 10:09:49.714633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:58:42.843 [2024-12-09 10:09:49.714761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.714990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715792] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:58:42.843 [2024-12-09 10:09:49.715837] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:58:42.843 [2024-12-09 10:09:49.715849] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68115f76-6e25-4e17-a078-1a730c2e63d7 00:58:42.843 [2024-12-09 10:09:49.715861] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:58:42.843 [2024-12-09 10:09:49.715873] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:58:42.843 [2024-12-09 10:09:49.715884] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:58:42.843 [2024-12-09 10:09:49.715896] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:58:42.844 [2024-12-09 10:09:49.715907] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:58:42.844 [2024-12-09 10:09:49.715924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:58:42.844 [2024-12-09 10:09:49.715935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:58:42.844 [2024-12-09 10:09:49.715945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:58:42.844 [2024-12-09 10:09:49.715955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:58:42.844 [2024-12-09 10:09:49.715967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:42.844 [2024-12-09 10:09:49.715978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:58:42.844 [2024-12-09 10:09:49.715991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:58:42.844 [2024-12-09 10:09:49.716002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:42.844 [2024-12-09 10:09:49.732371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:42.844 [2024-12-09 10:09:49.732411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:58:42.844 [2024-12-09 10:09:49.732445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.341 ms 00:58:42.844 [2024-12-09 10:09:49.732464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:42.844 [2024-12-09 10:09:49.732996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:42.844 [2024-12-09 10:09:49.733030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:58:42.844 [2024-12-09 10:09:49.733046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:58:42.844 [2024-12-09 10:09:49.733057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:42.844 [2024-12-09 10:09:49.780636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:42.844 [2024-12-09 10:09:49.780721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:42.844 [2024-12-09 10:09:49.780761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:42.844 [2024-12-09 10:09:49.780773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:42.844 [2024-12-09 10:09:49.780897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:58:42.844 [2024-12-09 10:09:49.780915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:42.844 [2024-12-09 10:09:49.780928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:42.844 [2024-12-09 10:09:49.780947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:42.844 [2024-12-09 10:09:49.781013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:42.844 [2024-12-09 10:09:49.781047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:42.844 [2024-12-09 10:09:49.781060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:42.844 [2024-12-09 10:09:49.781072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:42.844 [2024-12-09 10:09:49.781104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:42.844 [2024-12-09 10:09:49.781118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:42.844 [2024-12-09 10:09:49.781131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:42.844 [2024-12-09 10:09:49.781142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:42.844 [2024-12-09 10:09:49.883607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:42.844 [2024-12-09 10:09:49.883681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:42.844 [2024-12-09 10:09:49.883715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:42.844 [2024-12-09 10:09:49.883739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:43.103 [2024-12-09 10:09:49.964090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:43.103 [2024-12-09 10:09:49.964158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:43.103 [2024-12-09 10:09:49.964193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:43.103 [2024-12-09 10:09:49.964204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:43.103 [2024-12-09 10:09:49.964347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:43.103 [2024-12-09 10:09:49.964367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:43.103 [2024-12-09 10:09:49.964380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:43.103 [2024-12-09 10:09:49.964391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:43.103 [2024-12-09 10:09:49.964438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:43.103 [2024-12-09 10:09:49.964453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:43.103 [2024-12-09 10:09:49.964465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:43.103 [2024-12-09 10:09:49.964475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:43.103 [2024-12-09 10:09:49.964603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:43.103 [2024-12-09 10:09:49.964623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:43.103 [2024-12-09 10:09:49.964636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:43.103 [2024-12-09 10:09:49.964662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:43.103 [2024-12-09 
10:09:49.964727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:43.103 [2024-12-09 10:09:49.964749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:58:43.103 [2024-12-09 10:09:49.964760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:43.103 [2024-12-09 10:09:49.964770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:43.103 [2024-12-09 10:09:49.964816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:43.103 [2024-12-09 10:09:49.964830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:43.103 [2024-12-09 10:09:49.964841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:43.103 [2024-12-09 10:09:49.964851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:43.103 [2024-12-09 10:09:49.964914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:43.103 [2024-12-09 10:09:49.964931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:43.103 [2024-12-09 10:09:49.964942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:43.103 [2024-12-09 10:09:49.964952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:43.103 [2024-12-09 10:09:49.965109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 433.416 ms, result 0 00:58:44.041 00:58:44.041 00:58:44.041 10:09:50 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:58:44.609 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:58:44.609 10:09:51 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:58:44.609 10:09:51 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:58:44.609 10:09:51 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:58:44.609 10:09:51 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:58:44.609 10:09:51 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:58:44.609 10:09:51 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:58:44.609 Process with pid 79173 is not found 00:58:44.609 10:09:51 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79173 00:58:44.609 10:09:51 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79173 ']' 00:58:44.609 10:09:51 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79173 00:58:44.609 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79173) - No such process 00:58:44.609 10:09:51 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79173 is not found' 00:58:44.609 ************************************ 00:58:44.609 END TEST ftl_trim 00:58:44.609 ************************************ 00:58:44.609 00:58:44.609 real 1m13.378s 00:58:44.609 user 1m40.809s 00:58:44.609 sys 0m8.192s 00:58:44.609 10:09:51 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:44.609 10:09:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:58:44.609 10:09:51 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:58:44.609 10:09:51 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:58:44.609 10:09:51 ftl -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:58:44.609 10:09:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:58:44.609 ************************************ 00:58:44.609 START TEST ftl_restore 00:58:44.609 ************************************ 00:58:44.609 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:58:44.869 * Looking for test storage... 00:58:44.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:44.869 10:09:51 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:58:44.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:44.869 --rc genhtml_branch_coverage=1 00:58:44.869 --rc genhtml_function_coverage=1 00:58:44.869 --rc genhtml_legend=1 00:58:44.869 --rc geninfo_all_blocks=1 00:58:44.869 --rc geninfo_unexecuted_blocks=1 00:58:44.869 00:58:44.869 ' 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:58:44.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:44.869 --rc genhtml_branch_coverage=1 00:58:44.869 --rc genhtml_function_coverage=1 00:58:44.869 --rc genhtml_legend=1 00:58:44.869 --rc geninfo_all_blocks=1 00:58:44.869 --rc geninfo_unexecuted_blocks=1 00:58:44.869 00:58:44.869 ' 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:58:44.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:44.869 --rc genhtml_branch_coverage=1 00:58:44.869 --rc genhtml_function_coverage=1 00:58:44.869 --rc genhtml_legend=1 00:58:44.869 --rc geninfo_all_blocks=1 00:58:44.869 --rc geninfo_unexecuted_blocks=1 00:58:44.869 00:58:44.869 ' 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:58:44.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:44.869 --rc genhtml_branch_coverage=1 00:58:44.869 --rc genhtml_function_coverage=1 00:58:44.869 --rc genhtml_legend=1 00:58:44.869 --rc geninfo_all_blocks=1 00:58:44.869 --rc geninfo_unexecuted_blocks=1 00:58:44.869 00:58:44.869 ' 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
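The lcov probe traced above is scripts/common.sh's dotted-version comparison: cmp_versions splits each version string on IFS=.-: and compares it field by field, and lt 1.15 2 succeeding is what selects the legacy --rc lcov_*_coverage option spellings. A minimal stand-alone sketch of that logic, with illustrative names rather than the exact upstream helpers:

    # Sketch only: mirrors the IFS=.-: read -ra splitting and the
    # field-by-field (( )) compare visible in the xtrace above.
    lt() {                        # usage: lt 1.15 2 -> exit 0 when $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                  # equal is not "less than"
    }
    lt 1.15 2 && echo 'lcov < 2: keep legacy --rc lcov_*_coverage names'
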
00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.KP8K6hnmV5 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:58:44.869 
10:09:51 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79453 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:58:44.869 10:09:51 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79453 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79453 ']' 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:44.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:44.869 10:09:51 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:58:45.128 [2024-12-09 10:09:51.940787] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:58:45.129 [2024-12-09 10:09:51.941287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79453 ] 00:58:45.129 [2024-12-09 10:09:52.135136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:45.392 [2024-12-09 10:09:52.297318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:46.330 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:46.330 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:58:46.330 10:09:53 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:58:46.330 10:09:53 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:58:46.330 10:09:53 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:58:46.330 10:09:53 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:58:46.330 10:09:53 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:58:46.330 10:09:53 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:58:46.589 10:09:53 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:58:46.589 10:09:53 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:58:46.589 10:09:53 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:58:46.589 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:58:46.589 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:58:46.589 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:58:46.589 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:58:46.589 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:58:46.848 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:58:46.848 { 00:58:46.848 "name": "nvme0n1", 00:58:46.848 "aliases": [ 00:58:46.848 "71c9c6d8-1947-4b87-a2b8-750f34ab83da" 00:58:46.848 ], 00:58:46.848 "product_name": "NVMe disk", 00:58:46.848 "block_size": 4096, 00:58:46.848 "num_blocks": 1310720, 00:58:46.848 "uuid": 
"71c9c6d8-1947-4b87-a2b8-750f34ab83da", 00:58:46.848 "numa_id": -1, 00:58:46.848 "assigned_rate_limits": { 00:58:46.848 "rw_ios_per_sec": 0, 00:58:46.848 "rw_mbytes_per_sec": 0, 00:58:46.848 "r_mbytes_per_sec": 0, 00:58:46.848 "w_mbytes_per_sec": 0 00:58:46.848 }, 00:58:46.848 "claimed": true, 00:58:46.848 "claim_type": "read_many_write_one", 00:58:46.848 "zoned": false, 00:58:46.848 "supported_io_types": { 00:58:46.848 "read": true, 00:58:46.848 "write": true, 00:58:46.848 "unmap": true, 00:58:46.848 "flush": true, 00:58:46.848 "reset": true, 00:58:46.848 "nvme_admin": true, 00:58:46.848 "nvme_io": true, 00:58:46.849 "nvme_io_md": false, 00:58:46.849 "write_zeroes": true, 00:58:46.849 "zcopy": false, 00:58:46.849 "get_zone_info": false, 00:58:46.849 "zone_management": false, 00:58:46.849 "zone_append": false, 00:58:46.849 "compare": true, 00:58:46.849 "compare_and_write": false, 00:58:46.849 "abort": true, 00:58:46.849 "seek_hole": false, 00:58:46.849 "seek_data": false, 00:58:46.849 "copy": true, 00:58:46.849 "nvme_iov_md": false 00:58:46.849 }, 00:58:46.849 "driver_specific": { 00:58:46.849 "nvme": [ 00:58:46.849 { 00:58:46.849 "pci_address": "0000:00:11.0", 00:58:46.849 "trid": { 00:58:46.849 "trtype": "PCIe", 00:58:46.849 "traddr": "0000:00:11.0" 00:58:46.849 }, 00:58:46.849 "ctrlr_data": { 00:58:46.849 "cntlid": 0, 00:58:46.849 "vendor_id": "0x1b36", 00:58:46.849 "model_number": "QEMU NVMe Ctrl", 00:58:46.849 "serial_number": "12341", 00:58:46.849 "firmware_revision": "8.0.0", 00:58:46.849 "subnqn": "nqn.2019-08.org.qemu:12341", 00:58:46.849 "oacs": { 00:58:46.849 "security": 0, 00:58:46.849 "format": 1, 00:58:46.849 "firmware": 0, 00:58:46.849 "ns_manage": 1 00:58:46.849 }, 00:58:46.849 "multi_ctrlr": false, 00:58:46.849 "ana_reporting": false 00:58:46.849 }, 00:58:46.849 "vs": { 00:58:46.849 "nvme_version": "1.4" 00:58:46.849 }, 00:58:46.849 "ns_data": { 00:58:46.849 "id": 1, 00:58:46.849 "can_share": false 00:58:46.849 } 00:58:46.849 } 00:58:46.849 ], 00:58:46.849 "mp_policy": "active_passive" 00:58:46.849 } 00:58:46.849 } 00:58:46.849 ]' 00:58:46.849 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:58:46.849 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:58:46.849 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:58:47.108 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:58:47.108 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:58:47.108 10:09:53 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:58:47.108 10:09:53 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:58:47.108 10:09:53 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:58:47.108 10:09:53 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:58:47.108 10:09:53 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:58:47.108 10:09:53 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:58:47.367 10:09:54 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=63098526-a835-47de-a344-ef77f1bd38e5 00:58:47.367 10:09:54 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:58:47.367 10:09:54 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 63098526-a835-47de-a344-ef77f1bd38e5 00:58:47.626 10:09:54 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:58:47.885 10:09:54 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=40fa4415-531a-41a1-8e9a-aa32eea5f7fc 00:58:47.885 10:09:54 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 40fa4415-531a-41a1-8e9a-aa32eea5f7fc 00:58:48.148 10:09:55 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=3e923f90-dfc9-4b72-8bdb-bcd2bf500489 00:58:48.148 10:09:55 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:58:48.148 10:09:55 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3e923f90-dfc9-4b72-8bdb-bcd2bf500489 00:58:48.148 10:09:55 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:58:48.148 10:09:55 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:58:48.148 10:09:55 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=3e923f90-dfc9-4b72-8bdb-bcd2bf500489 00:58:48.148 10:09:55 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:58:48.148 10:09:55 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 3e923f90-dfc9-4b72-8bdb-bcd2bf500489 00:58:48.148 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=3e923f90-dfc9-4b72-8bdb-bcd2bf500489 00:58:48.148 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:58:48.148 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:58:48.148 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:58:48.148 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e923f90-dfc9-4b72-8bdb-bcd2bf500489 00:58:48.412 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:58:48.412 { 00:58:48.412 "name": "3e923f90-dfc9-4b72-8bdb-bcd2bf500489", 00:58:48.412 "aliases": [ 00:58:48.412 "lvs/nvme0n1p0" 00:58:48.412 ], 00:58:48.412 "product_name": "Logical Volume", 00:58:48.412 "block_size": 4096, 00:58:48.412 "num_blocks": 26476544, 00:58:48.412 "uuid": "3e923f90-dfc9-4b72-8bdb-bcd2bf500489", 00:58:48.412 "assigned_rate_limits": { 00:58:48.412 "rw_ios_per_sec": 0, 00:58:48.412 "rw_mbytes_per_sec": 0, 00:58:48.412 "r_mbytes_per_sec": 0, 00:58:48.412 "w_mbytes_per_sec": 0 00:58:48.412 }, 00:58:48.412 "claimed": false, 00:58:48.412 "zoned": false, 00:58:48.412 "supported_io_types": { 00:58:48.412 "read": true, 00:58:48.412 "write": true, 00:58:48.412 "unmap": true, 00:58:48.412 "flush": false, 00:58:48.412 "reset": true, 00:58:48.412 "nvme_admin": false, 00:58:48.412 "nvme_io": false, 00:58:48.412 "nvme_io_md": false, 00:58:48.412 "write_zeroes": true, 00:58:48.412 "zcopy": false, 00:58:48.412 "get_zone_info": false, 00:58:48.412 "zone_management": false, 00:58:48.412 "zone_append": false, 00:58:48.412 "compare": false, 00:58:48.412 "compare_and_write": false, 00:58:48.412 "abort": false, 00:58:48.412 "seek_hole": true, 00:58:48.412 "seek_data": true, 00:58:48.412 "copy": false, 00:58:48.412 "nvme_iov_md": false 00:58:48.412 }, 00:58:48.412 "driver_specific": { 00:58:48.412 "lvol": { 00:58:48.412 "lvol_store_uuid": "40fa4415-531a-41a1-8e9a-aa32eea5f7fc", 00:58:48.412 "base_bdev": "nvme0n1", 00:58:48.412 "thin_provision": true, 00:58:48.412 "num_allocated_clusters": 0, 00:58:48.412 "snapshot": false, 00:58:48.412 "clone": false, 00:58:48.412 "esnap_clone": false 00:58:48.412 } 00:58:48.412 } 00:58:48.412 } 00:58:48.412 ]' 00:58:48.412 10:09:55 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:58:48.412 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:58:48.412 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:58:48.671 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:58:48.671 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:58:48.671 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:58:48.671 10:09:55 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:58:48.671 10:09:55 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:58:48.671 10:09:55 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:58:48.930 10:09:55 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:58:48.930 10:09:55 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:58:48.930 10:09:55 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 3e923f90-dfc9-4b72-8bdb-bcd2bf500489 00:58:48.930 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=3e923f90-dfc9-4b72-8bdb-bcd2bf500489 00:58:48.930 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:58:48.930 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:58:48.930 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:58:48.930 10:09:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e923f90-dfc9-4b72-8bdb-bcd2bf500489 00:58:49.189 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:58:49.189 { 00:58:49.189 "name": "3e923f90-dfc9-4b72-8bdb-bcd2bf500489", 00:58:49.189 "aliases": [ 00:58:49.189 "lvs/nvme0n1p0" 00:58:49.189 ], 00:58:49.189 "product_name": "Logical Volume", 00:58:49.189 "block_size": 4096, 00:58:49.189 "num_blocks": 26476544, 00:58:49.189 "uuid": "3e923f90-dfc9-4b72-8bdb-bcd2bf500489", 00:58:49.189 "assigned_rate_limits": { 00:58:49.189 "rw_ios_per_sec": 0, 00:58:49.189 "rw_mbytes_per_sec": 0, 00:58:49.189 "r_mbytes_per_sec": 0, 00:58:49.189 "w_mbytes_per_sec": 0 00:58:49.189 }, 00:58:49.189 "claimed": false, 00:58:49.189 "zoned": false, 00:58:49.189 "supported_io_types": { 00:58:49.189 "read": true, 00:58:49.189 "write": true, 00:58:49.189 "unmap": true, 00:58:49.189 "flush": false, 00:58:49.189 "reset": true, 00:58:49.189 "nvme_admin": false, 00:58:49.189 "nvme_io": false, 00:58:49.189 "nvme_io_md": false, 00:58:49.189 "write_zeroes": true, 00:58:49.189 "zcopy": false, 00:58:49.189 "get_zone_info": false, 00:58:49.189 "zone_management": false, 00:58:49.189 "zone_append": false, 00:58:49.189 "compare": false, 00:58:49.189 "compare_and_write": false, 00:58:49.189 "abort": false, 00:58:49.189 "seek_hole": true, 00:58:49.189 "seek_data": true, 00:58:49.189 "copy": false, 00:58:49.189 "nvme_iov_md": false 00:58:49.189 }, 00:58:49.189 "driver_specific": { 00:58:49.189 "lvol": { 00:58:49.189 "lvol_store_uuid": "40fa4415-531a-41a1-8e9a-aa32eea5f7fc", 00:58:49.189 "base_bdev": "nvme0n1", 00:58:49.189 "thin_provision": true, 00:58:49.189 "num_allocated_clusters": 0, 00:58:49.189 "snapshot": false, 00:58:49.189 "clone": false, 00:58:49.189 "esnap_clone": false 00:58:49.189 } 00:58:49.189 } 00:58:49.189 } 00:58:49.189 ]' 00:58:49.189 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
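The get_bdev_size call in flight here is just block_size * num_blocks converted to MiB: the 3e923f90-dfc9-4b72-8bdb-bcd2bf500489 lvol reads back 4096 and 26476544, i.e. 4096 * 26476544 / 2^20 = 103424 MiB, while nvme0n1 earlier gave 4096 * 1310720 = 5120 MiB. A condensed sketch of the same computation (rpc.py path as used in this run):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_size() {             # condensed sketch of the helper traced above
        local bdev_info bs nb
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544 for the lvol
        echo $(( bs * nb / 1024 / 1024 ))              # -> 103424 (MiB)
    }
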
00:58:49.189 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:58:49.189 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:58:49.189 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:58:49.189 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:58:49.189 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:58:49.189 10:09:56 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:58:49.189 10:09:56 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:58:49.579 10:09:56 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:58:49.579 10:09:56 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 3e923f90-dfc9-4b72-8bdb-bcd2bf500489 00:58:49.579 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=3e923f90-dfc9-4b72-8bdb-bcd2bf500489 00:58:49.579 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:58:49.579 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:58:49.579 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:58:49.579 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e923f90-dfc9-4b72-8bdb-bcd2bf500489 00:58:49.839 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:58:49.839 { 00:58:49.839 "name": "3e923f90-dfc9-4b72-8bdb-bcd2bf500489", 00:58:49.839 "aliases": [ 00:58:49.839 "lvs/nvme0n1p0" 00:58:49.839 ], 00:58:49.839 "product_name": "Logical Volume", 00:58:49.839 "block_size": 4096, 00:58:49.839 "num_blocks": 26476544, 00:58:49.839 "uuid": "3e923f90-dfc9-4b72-8bdb-bcd2bf500489", 00:58:49.839 "assigned_rate_limits": { 00:58:49.839 "rw_ios_per_sec": 0, 00:58:49.839 "rw_mbytes_per_sec": 0, 00:58:49.839 "r_mbytes_per_sec": 0, 00:58:49.839 "w_mbytes_per_sec": 0 00:58:49.839 }, 00:58:49.839 "claimed": false, 00:58:49.839 "zoned": false, 00:58:49.839 "supported_io_types": { 00:58:49.839 "read": true, 00:58:49.839 "write": true, 00:58:49.839 "unmap": true, 00:58:49.839 "flush": false, 00:58:49.839 "reset": true, 00:58:49.839 "nvme_admin": false, 00:58:49.839 "nvme_io": false, 00:58:49.839 "nvme_io_md": false, 00:58:49.839 "write_zeroes": true, 00:58:49.839 "zcopy": false, 00:58:49.839 "get_zone_info": false, 00:58:49.839 "zone_management": false, 00:58:49.839 "zone_append": false, 00:58:49.839 "compare": false, 00:58:49.839 "compare_and_write": false, 00:58:49.839 "abort": false, 00:58:49.839 "seek_hole": true, 00:58:49.839 "seek_data": true, 00:58:49.839 "copy": false, 00:58:49.839 "nvme_iov_md": false 00:58:49.839 }, 00:58:49.839 "driver_specific": { 00:58:49.839 "lvol": { 00:58:49.839 "lvol_store_uuid": "40fa4415-531a-41a1-8e9a-aa32eea5f7fc", 00:58:49.839 "base_bdev": "nvme0n1", 00:58:49.839 "thin_provision": true, 00:58:49.839 "num_allocated_clusters": 0, 00:58:49.839 "snapshot": false, 00:58:49.839 "clone": false, 00:58:49.839 "esnap_clone": false 00:58:49.839 } 00:58:49.839 } 00:58:49.839 } 00:58:49.839 ]' 00:58:49.839 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:58:49.839 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:58:49.839 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:58:49.839 10:09:56 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:58:49.839 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:58:49.839 10:09:56 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:58:49.839 10:09:56 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:58:49.839 10:09:56 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3e923f90-dfc9-4b72-8bdb-bcd2bf500489 --l2p_dram_limit 10' 00:58:49.839 10:09:56 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:58:49.839 10:09:56 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:58:49.839 10:09:56 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:58:49.839 10:09:56 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:58:49.839 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:58:49.839 10:09:56 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3e923f90-dfc9-4b72-8bdb-bcd2bf500489 --l2p_dram_limit 10 -c nvc0n1p0 00:58:50.099 [2024-12-09 10:09:57.112857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:50.099 [2024-12-09 10:09:57.112931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:58:50.099 [2024-12-09 10:09:57.112958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:58:50.099 [2024-12-09 10:09:57.112972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:50.099 [2024-12-09 10:09:57.113057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:50.099 [2024-12-09 10:09:57.113075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:50.099 [2024-12-09 10:09:57.113092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:58:50.099 [2024-12-09 10:09:57.113104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:50.099 [2024-12-09 10:09:57.113146] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:58:50.099 [2024-12-09 10:09:57.114224] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:58:50.099 [2024-12-09 10:09:57.114281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:50.099 [2024-12-09 10:09:57.114297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:50.099 [2024-12-09 10:09:57.114317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:58:50.099 [2024-12-09 10:09:57.114329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:50.099 [2024-12-09 10:09:57.114480] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5e44c0e4-883f-47a4-b8a4-dbaa5e36b4fb 00:58:50.099 [2024-12-09 10:09:57.116379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:50.099 [2024-12-09 10:09:57.116424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:58:50.099 [2024-12-09 10:09:57.116442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:58:50.099 [2024-12-09 10:09:57.116460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:50.099 [2024-12-09 10:09:57.126544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:50.099 [2024-12-09 
10:09:57.126787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:50.099 [2024-12-09 10:09:57.126827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.024 ms 00:58:50.099 [2024-12-09 10:09:57.126850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:50.099 [2024-12-09 10:09:57.126992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:50.099 [2024-12-09 10:09:57.127016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:50.099 [2024-12-09 10:09:57.127030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:58:50.099 [2024-12-09 10:09:57.127056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:50.099 [2024-12-09 10:09:57.127154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:50.099 [2024-12-09 10:09:57.127178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:58:50.099 [2024-12-09 10:09:57.127194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:58:50.099 [2024-12-09 10:09:57.127214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:50.099 [2024-12-09 10:09:57.127277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:58:50.099 [2024-12-09 10:09:57.132770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:50.099 [2024-12-09 10:09:57.132821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:50.099 [2024-12-09 10:09:57.132843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.529 ms 00:58:50.099 [2024-12-09 10:09:57.132855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:50.099 [2024-12-09 10:09:57.132904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:50.099 [2024-12-09 10:09:57.132920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:58:50.099 [2024-12-09 10:09:57.132935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:58:50.099 [2024-12-09 10:09:57.132946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:50.099 [2024-12-09 10:09:57.133003] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:58:50.099 [2024-12-09 10:09:57.133172] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:58:50.099 [2024-12-09 10:09:57.133197] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:58:50.099 [2024-12-09 10:09:57.133214] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:58:50.099 [2024-12-09 10:09:57.133231] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:58:50.099 [2024-12-09 10:09:57.133245] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:58:50.099 [2024-12-09 10:09:57.133299] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:58:50.099 [2024-12-09 10:09:57.133311] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:58:50.099 [2024-12-09 10:09:57.133338] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:58:50.099 [2024-12-09 10:09:57.133351] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:58:50.099 [2024-12-09 10:09:57.133366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:50.099 [2024-12-09 10:09:57.133390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:58:50.099 [2024-12-09 10:09:57.133405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:58:50.099 [2024-12-09 10:09:57.133418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:50.099 [2024-12-09 10:09:57.133520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:50.099 [2024-12-09 10:09:57.133535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:58:50.099 [2024-12-09 10:09:57.133550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:58:50.099 [2024-12-09 10:09:57.133561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:50.099 [2024-12-09 10:09:57.133685] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:58:50.100 [2024-12-09 10:09:57.133710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:58:50.100 [2024-12-09 10:09:57.133726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:58:50.100 [2024-12-09 10:09:57.133738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:50.100 [2024-12-09 10:09:57.133752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:58:50.100 [2024-12-09 10:09:57.133763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:58:50.100 [2024-12-09 10:09:57.133776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:58:50.100 [2024-12-09 10:09:57.133787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:58:50.100 [2024-12-09 10:09:57.133799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:58:50.100 [2024-12-09 10:09:57.133819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:58:50.100 [2024-12-09 10:09:57.133834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:58:50.100 [2024-12-09 10:09:57.133845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:58:50.100 [2024-12-09 10:09:57.133857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:58:50.100 [2024-12-09 10:09:57.133882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:58:50.100 [2024-12-09 10:09:57.133896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:58:50.100 [2024-12-09 10:09:57.133907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:50.100 [2024-12-09 10:09:57.133922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:58:50.100 [2024-12-09 10:09:57.133933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:58:50.100 [2024-12-09 10:09:57.133945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:50.100 [2024-12-09 10:09:57.133956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:58:50.100 [2024-12-09 10:09:57.133969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:58:50.100 [2024-12-09 10:09:57.133980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:50.100 [2024-12-09 10:09:57.133993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:58:50.100 
[2024-12-09 10:09:57.134004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:58:50.100 [2024-12-09 10:09:57.134017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:50.100 [2024-12-09 10:09:57.134027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:58:50.100 [2024-12-09 10:09:57.134040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:58:50.100 [2024-12-09 10:09:57.134050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:50.100 [2024-12-09 10:09:57.134063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:58:50.100 [2024-12-09 10:09:57.134074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:58:50.100 [2024-12-09 10:09:57.134086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:58:50.100 [2024-12-09 10:09:57.134096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:58:50.100 [2024-12-09 10:09:57.134112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:58:50.100 [2024-12-09 10:09:57.134122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:58:50.100 [2024-12-09 10:09:57.134135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:58:50.100 [2024-12-09 10:09:57.134146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:58:50.100 [2024-12-09 10:09:57.134161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:58:50.100 [2024-12-09 10:09:57.134175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:58:50.100 [2024-12-09 10:09:57.134188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:58:50.100 [2024-12-09 10:09:57.134198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:50.100 [2024-12-09 10:09:57.134211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:58:50.100 [2024-12-09 10:09:57.134222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:58:50.100 [2024-12-09 10:09:57.134234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:50.100 [2024-12-09 10:09:57.134245] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:58:50.100 [2024-12-09 10:09:57.134274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:58:50.100 [2024-12-09 10:09:57.134287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:58:50.100 [2024-12-09 10:09:57.134301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:58:50.100 [2024-12-09 10:09:57.134313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:58:50.100 [2024-12-09 10:09:57.134329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:58:50.100 [2024-12-09 10:09:57.134340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:58:50.100 [2024-12-09 10:09:57.134354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:58:50.100 [2024-12-09 10:09:57.134365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:58:50.100 [2024-12-09 10:09:57.134378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:58:50.100 [2024-12-09 10:09:57.134392] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:58:50.100 [2024-12-09 
10:09:57.134412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:50.100 [2024-12-09 10:09:57.134425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:58:50.100 [2024-12-09 10:09:57.134439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:58:50.100 [2024-12-09 10:09:57.134457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:58:50.100 [2024-12-09 10:09:57.134471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:58:50.100 [2024-12-09 10:09:57.134483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:58:50.100 [2024-12-09 10:09:57.134497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:58:50.100 [2024-12-09 10:09:57.134508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:58:50.100 [2024-12-09 10:09:57.134524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:58:50.100 [2024-12-09 10:09:57.134543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:58:50.100 [2024-12-09 10:09:57.134559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:58:50.100 [2024-12-09 10:09:57.134571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:58:50.100 [2024-12-09 10:09:57.134584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:58:50.100 [2024-12-09 10:09:57.134597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:58:50.100 [2024-12-09 10:09:57.134611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:58:50.100 [2024-12-09 10:09:57.134623] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:58:50.100 [2024-12-09 10:09:57.134638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:50.100 [2024-12-09 10:09:57.134650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:58:50.100 [2024-12-09 10:09:57.134664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:58:50.100 [2024-12-09 10:09:57.134676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:58:50.100 [2024-12-09 10:09:57.134690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:58:50.100 [2024-12-09 10:09:57.134703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:50.100 [2024-12-09 10:09:57.134717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:58:50.100 [2024-12-09 10:09:57.134729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:58:50.100 [2024-12-09 10:09:57.134742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:50.100 [2024-12-09 10:09:57.134800] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:58:50.100 [2024-12-09 10:09:57.134823] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:58:54.291 [2024-12-09 10:10:00.848778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.291 [2024-12-09 10:10:00.848865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:58:54.291 [2024-12-09 10:10:00.848888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3713.991 ms 00:58:54.291 [2024-12-09 10:10:00.848904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.291 [2024-12-09 10:10:00.888498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.291 [2024-12-09 10:10:00.888571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:54.291 [2024-12-09 10:10:00.888594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.318 ms 00:58:54.291 [2024-12-09 10:10:00.888611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.291 [2024-12-09 10:10:00.888801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.291 [2024-12-09 10:10:00.888826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:58:54.291 [2024-12-09 10:10:00.888851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:58:54.291 [2024-12-09 10:10:00.888872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.291 [2024-12-09 10:10:00.934578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.291 [2024-12-09 10:10:00.934648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:54.291 [2024-12-09 10:10:00.934670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.628 ms 00:58:54.291 [2024-12-09 10:10:00.934686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.291 [2024-12-09 10:10:00.934767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.291 [2024-12-09 10:10:00.934807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:54.291 [2024-12-09 10:10:00.934821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:58:54.291 [2024-12-09 10:10:00.934848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.291 [2024-12-09 10:10:00.935530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.291 [2024-12-09 10:10:00.935555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:54.292 [2024-12-09 10:10:00.935569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:58:54.292 [2024-12-09 10:10:00.935583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.292 
[2024-12-09 10:10:00.935734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.292 [2024-12-09 10:10:00.935759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:54.292 [2024-12-09 10:10:00.935776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:58:54.292 [2024-12-09 10:10:00.935793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.292 [2024-12-09 10:10:00.957521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.292 [2024-12-09 10:10:00.957585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:54.292 [2024-12-09 10:10:00.957605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.700 ms 00:58:54.292 [2024-12-09 10:10:00.957620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.292 [2024-12-09 10:10:00.984145] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:58:54.292 [2024-12-09 10:10:00.988795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.292 [2024-12-09 10:10:00.988976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:58:54.292 [2024-12-09 10:10:00.989013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.045 ms 00:58:54.292 [2024-12-09 10:10:00.989028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.292 [2024-12-09 10:10:01.085637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.292 [2024-12-09 10:10:01.085707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:58:54.292 [2024-12-09 10:10:01.085734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.544 ms 00:58:54.292 [2024-12-09 10:10:01.085747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.292 [2024-12-09 10:10:01.086004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.292 [2024-12-09 10:10:01.086032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:58:54.292 [2024-12-09 10:10:01.086053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:58:54.292 [2024-12-09 10:10:01.086065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.292 [2024-12-09 10:10:01.117506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.292 [2024-12-09 10:10:01.117563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:58:54.292 [2024-12-09 10:10:01.117587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.354 ms 00:58:54.292 [2024-12-09 10:10:01.117601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.292 [2024-12-09 10:10:01.147728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.292 [2024-12-09 10:10:01.147935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:58:54.292 [2024-12-09 10:10:01.147972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.077 ms 00:58:54.292 [2024-12-09 10:10:01.147985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.292 [2024-12-09 10:10:01.148851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.292 [2024-12-09 10:10:01.148884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:58:54.292 
[2024-12-09 10:10:01.148903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:58:54.292 [2024-12-09 10:10:01.148918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.292 [2024-12-09 10:10:01.245459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.292 [2024-12-09 10:10:01.245524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:58:54.292 [2024-12-09 10:10:01.245553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.462 ms 00:58:54.292 [2024-12-09 10:10:01.245567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.292 [2024-12-09 10:10:01.278768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.292 [2024-12-09 10:10:01.278980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:58:54.292 [2024-12-09 10:10:01.279024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.081 ms 00:58:54.292 [2024-12-09 10:10:01.279038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.292 [2024-12-09 10:10:01.310852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.292 [2024-12-09 10:10:01.310911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:58:54.292 [2024-12-09 10:10:01.310935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.744 ms 00:58:54.292 [2024-12-09 10:10:01.310947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.551 [2024-12-09 10:10:01.342075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.551 [2024-12-09 10:10:01.342292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:58:54.551 [2024-12-09 10:10:01.342328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.071 ms 00:58:54.551 [2024-12-09 10:10:01.342342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.551 [2024-12-09 10:10:01.342409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.551 [2024-12-09 10:10:01.342430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:58:54.551 [2024-12-09 10:10:01.342450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:58:54.551 [2024-12-09 10:10:01.342462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.551 [2024-12-09 10:10:01.342615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:54.551 [2024-12-09 10:10:01.342643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:58:54.551 [2024-12-09 10:10:01.342659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:58:54.551 [2024-12-09 10:10:01.342671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:54.551 [2024-12-09 10:10:01.343967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4230.630 ms, result 0 00:58:54.551 { 00:58:54.551 "name": "ftl0", 00:58:54.551 "uuid": "5e44c0e4-883f-47a4-b8a4-dbaa5e36b4fb" 00:58:54.551 } 00:58:54.551 10:10:01 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:58:54.551 10:10:01 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:58:54.810 10:10:01 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:58:54.810 10:10:01 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:58:55.068 [2024-12-09 10:10:01.939352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.068 [2024-12-09 10:10:01.939447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:58:55.068 [2024-12-09 10:10:01.939469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:58:55.068 [2024-12-09 10:10:01.939485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.068 [2024-12-09 10:10:01.939521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:58:55.068 [2024-12-09 10:10:01.943211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.068 [2024-12-09 10:10:01.943255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:58:55.068 [2024-12-09 10:10:01.943275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.660 ms 00:58:55.068 [2024-12-09 10:10:01.943288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.068 [2024-12-09 10:10:01.943650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.068 [2024-12-09 10:10:01.943679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:58:55.068 [2024-12-09 10:10:01.943696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:58:55.068 [2024-12-09 10:10:01.943707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.068 [2024-12-09 10:10:01.946893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.068 [2024-12-09 10:10:01.947061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:58:55.068 [2024-12-09 10:10:01.947093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.160 ms 00:58:55.068 [2024-12-09 10:10:01.947107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.068 [2024-12-09 10:10:01.953633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.068 [2024-12-09 10:10:01.953778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:58:55.068 [2024-12-09 10:10:01.953814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.489 ms 00:58:55.068 [2024-12-09 10:10:01.953827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.068 [2024-12-09 10:10:01.985784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.068 [2024-12-09 10:10:01.985845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:58:55.068 [2024-12-09 10:10:01.985888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.856 ms 00:58:55.068 [2024-12-09 10:10:01.985902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.068 [2024-12-09 10:10:02.006108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.069 [2024-12-09 10:10:02.006156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:58:55.069 [2024-12-09 10:10:02.006178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.144 ms 00:58:55.069 [2024-12-09 10:10:02.006191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.069 [2024-12-09 10:10:02.006412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.069 [2024-12-09 10:10:02.006435] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:58:55.069 [2024-12-09 10:10:02.006451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:58:55.069 [2024-12-09 10:10:02.006463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.069 [2024-12-09 10:10:02.037276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.069 [2024-12-09 10:10:02.037321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:58:55.069 [2024-12-09 10:10:02.037343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.779 ms 00:58:55.069 [2024-12-09 10:10:02.037355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.069 [2024-12-09 10:10:02.067518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.069 [2024-12-09 10:10:02.067561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:58:55.069 [2024-12-09 10:10:02.067582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.108 ms 00:58:55.069 [2024-12-09 10:10:02.067595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.069 [2024-12-09 10:10:02.097612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.069 [2024-12-09 10:10:02.097654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:58:55.069 [2024-12-09 10:10:02.097675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.960 ms 00:58:55.069 [2024-12-09 10:10:02.097687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.329 [2024-12-09 10:10:02.127885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.329 [2024-12-09 10:10:02.127930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:58:55.329 [2024-12-09 10:10:02.127950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.073 ms 00:58:55.329 [2024-12-09 10:10:02.127962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.329 [2024-12-09 10:10:02.128017] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:58:55.329 [2024-12-09 10:10:02.128042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128180] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 
[2024-12-09 10:10:02.128556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:58:55.329 [2024-12-09 10:10:02.128764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:58:55.330 [2024-12-09 10:10:02.128933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.128987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:58:55.330 [2024-12-09 10:10:02.129551] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:58:55.330 [2024-12-09 10:10:02.129565] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e44c0e4-883f-47a4-b8a4-dbaa5e36b4fb 00:58:55.330 [2024-12-09 10:10:02.129578] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:58:55.330 [2024-12-09 10:10:02.129594] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:58:55.330 [2024-12-09 10:10:02.129609] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:58:55.330 [2024-12-09 10:10:02.129623] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:58:55.330 [2024-12-09 10:10:02.129634] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:58:55.330 [2024-12-09 10:10:02.129648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:58:55.330 [2024-12-09 10:10:02.129660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:58:55.330 [2024-12-09 10:10:02.129673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:58:55.330 [2024-12-09 10:10:02.129683] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:58:55.330 [2024-12-09 10:10:02.129696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.330 [2024-12-09 10:10:02.129708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:58:55.330 [2024-12-09 10:10:02.129723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:58:55.330 [2024-12-09 10:10:02.129737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.330 [2024-12-09 10:10:02.146833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.330 [2024-12-09 10:10:02.147014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:58:55.330 [2024-12-09 10:10:02.147048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.988 ms 00:58:55.330 [2024-12-09 10:10:02.147063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.330 [2024-12-09 10:10:02.147588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:58:55.330 [2024-12-09 10:10:02.147613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:58:55.330 [2024-12-09 10:10:02.147634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:58:55.330 [2024-12-09 10:10:02.147646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.330 [2024-12-09 10:10:02.204023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:55.330 [2024-12-09 10:10:02.204085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:58:55.330 [2024-12-09 10:10:02.204121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:55.330 [2024-12-09 10:10:02.204133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.330 [2024-12-09 10:10:02.204235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:55.330 [2024-12-09 10:10:02.204274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:58:55.330 [2024-12-09 10:10:02.204297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:55.330 [2024-12-09 10:10:02.204325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.330 [2024-12-09 10:10:02.204454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:55.330 [2024-12-09 10:10:02.204475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:58:55.330 [2024-12-09 10:10:02.204497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:55.330 [2024-12-09 10:10:02.204509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.330 [2024-12-09 10:10:02.204542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:55.330 [2024-12-09 10:10:02.204556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:58:55.330 [2024-12-09 10:10:02.204574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:55.330 [2024-12-09 10:10:02.204588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.330 [2024-12-09 10:10:02.314331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:55.330 [2024-12-09 10:10:02.314536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:58:55.330 [2024-12-09 10:10:02.314573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:58:55.330 [2024-12-09 10:10:02.314594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.590 [2024-12-09 10:10:02.400372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:55.590 [2024-12-09 10:10:02.400448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:58:55.590 [2024-12-09 10:10:02.400472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:55.590 [2024-12-09 10:10:02.400488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.590 [2024-12-09 10:10:02.400632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:55.590 [2024-12-09 10:10:02.400652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:58:55.590 [2024-12-09 10:10:02.400667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:55.590 [2024-12-09 10:10:02.400679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.590 [2024-12-09 10:10:02.400757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:55.590 [2024-12-09 10:10:02.400776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:58:55.590 [2024-12-09 10:10:02.400791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:55.590 [2024-12-09 10:10:02.400803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.590 [2024-12-09 10:10:02.400947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:55.590 [2024-12-09 10:10:02.400967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:58:55.590 [2024-12-09 10:10:02.400983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:55.590 [2024-12-09 10:10:02.400994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.590 [2024-12-09 10:10:02.401050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:55.590 [2024-12-09 10:10:02.401068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:58:55.590 [2024-12-09 10:10:02.401083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:55.590 [2024-12-09 10:10:02.401094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.590 [2024-12-09 10:10:02.401151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:55.590 [2024-12-09 10:10:02.401167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:58:55.590 [2024-12-09 10:10:02.401181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:55.590 [2024-12-09 10:10:02.401193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.590 [2024-12-09 10:10:02.401288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:58:55.590 [2024-12-09 10:10:02.401309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:58:55.590 [2024-12-09 10:10:02.401324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:58:55.590 [2024-12-09 10:10:02.401357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:58:55.590 [2024-12-09 10:10:02.401530] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 462.143 ms, result 0 00:58:55.590 true 00:58:55.590 10:10:02 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79453 
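[editor's note] The 'FTL shutdown' management process traced above completes with result 0 in 462.143 ms, with each step reported by trace_step as a name/duration/status triple, before the bdev_ftl_unload RPC returns true and the test tears the target down with killprocess. A minimal post-processing sketch in Python — hypothetical, not an SPDK tool; only the record format is taken from this log, and it assumes the raw console log with one record per line:

    import re
    from typing import Iterable

    # Pair each step's "name:" record with the "duration:" record that
    # trace_step emits immediately after it, so a shutdown (or startup)
    # run can be summarized per step.
    NAME = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.*)")
    DUR = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

    def step_durations(lines: Iterable[str]) -> list[tuple[str, float]]:
        steps, pending = [], None
        for line in lines:
            if (m := NAME.search(line)):
                pending = m.group(1).strip()
            elif pending and (m := DUR.search(line)):
                steps.append((pending, float(m.group(1))))
                pending = None
        return steps

Applied to the shutdown steps above, the named steps account for roughly 206 ms of the reported 462.143 ms total, so the per-step durations are a lower bound on wall-clock time; the remainder falls between steps.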
00:58:55.590 10:10:02 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79453 ']' 00:58:55.590 10:10:02 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79453 00:58:55.590 10:10:02 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:58:55.590 10:10:02 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:55.590 10:10:02 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79453 00:58:55.590 killing process with pid 79453 00:58:55.591 10:10:02 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:55.591 10:10:02 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:55.591 10:10:02 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79453' 00:58:55.591 10:10:02 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79453 00:58:55.591 10:10:02 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79453 00:58:59.781 10:10:06 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:59:06.348 262144+0 records in 00:59:06.348 262144+0 records out 00:59:06.348 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.41493 s, 198 MB/s 00:59:06.348 10:10:12 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:59:07.726 10:10:14 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:59:07.726 [2024-12-09 10:10:14.481502] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:59:07.726 [2024-12-09 10:10:14.481681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79720 ] 00:59:07.726 [2024-12-09 10:10:14.669832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:07.985 [2024-12-09 10:10:14.831679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:08.244 [2024-12-09 10:10:15.272593] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:59:08.244 [2024-12-09 10:10:15.272693] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:59:08.560 [2024-12-09 10:10:15.446210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.560 [2024-12-09 10:10:15.446306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:59:08.560 [2024-12-09 10:10:15.446328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:59:08.560 [2024-12-09 10:10:15.446340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.560 [2024-12-09 10:10:15.446416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.560 [2024-12-09 10:10:15.446443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:59:08.560 [2024-12-09 10:10:15.446456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:59:08.560 [2024-12-09 10:10:15.446468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.560 [2024-12-09 10:10:15.446501] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:59:08.560 [2024-12-09 10:10:15.447420] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:59:08.560 [2024-12-09 10:10:15.447457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.560 [2024-12-09 10:10:15.447472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:59:08.560 [2024-12-09 10:10:15.447485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:59:08.560 [2024-12-09 10:10:15.447496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.560 [2024-12-09 10:10:15.449481] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:59:08.560 [2024-12-09 10:10:15.468244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.560 [2024-12-09 10:10:15.468457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:59:08.560 [2024-12-09 10:10:15.468490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.764 ms 00:59:08.560 [2024-12-09 10:10:15.468504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.560 [2024-12-09 10:10:15.468609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.560 [2024-12-09 10:10:15.468630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:59:08.560 [2024-12-09 10:10:15.468643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:59:08.560 [2024-12-09 10:10:15.468654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.560 [2024-12-09 10:10:15.478305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.560 [2024-12-09 10:10:15.478356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:59:08.560 [2024-12-09 10:10:15.478373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.542 ms 00:59:08.560 [2024-12-09 10:10:15.478418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.560 [2024-12-09 10:10:15.478550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.560 [2024-12-09 10:10:15.478570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:59:08.560 [2024-12-09 10:10:15.478583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:59:08.560 [2024-12-09 10:10:15.478595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.560 [2024-12-09 10:10:15.478661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.560 [2024-12-09 10:10:15.478679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:59:08.560 [2024-12-09 10:10:15.478692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:59:08.560 [2024-12-09 10:10:15.478703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.560 [2024-12-09 10:10:15.478776] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:59:08.560 [2024-12-09 10:10:15.484036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.560 [2024-12-09 10:10:15.484210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:59:08.560 [2024-12-09 10:10:15.484246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.270 ms 00:59:08.560 [2024-12-09 10:10:15.484285] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.560 [2024-12-09 10:10:15.484344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.560 [2024-12-09 10:10:15.484363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:59:08.561 [2024-12-09 10:10:15.484377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:59:08.561 [2024-12-09 10:10:15.484389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.561 [2024-12-09 10:10:15.484438] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:59:08.561 [2024-12-09 10:10:15.484474] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:59:08.561 [2024-12-09 10:10:15.484518] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:59:08.561 [2024-12-09 10:10:15.484544] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:59:08.561 [2024-12-09 10:10:15.484655] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:59:08.561 [2024-12-09 10:10:15.484671] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:59:08.561 [2024-12-09 10:10:15.484687] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:59:08.561 [2024-12-09 10:10:15.484702] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:59:08.561 [2024-12-09 10:10:15.484716] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:59:08.561 [2024-12-09 10:10:15.484729] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:59:08.561 [2024-12-09 10:10:15.484740] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:59:08.561 [2024-12-09 10:10:15.484756] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:59:08.561 [2024-12-09 10:10:15.484767] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:59:08.561 [2024-12-09 10:10:15.484780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.561 [2024-12-09 10:10:15.484792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:59:08.561 [2024-12-09 10:10:15.484803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:59:08.561 [2024-12-09 10:10:15.484814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.561 [2024-12-09 10:10:15.484911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.561 [2024-12-09 10:10:15.484927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:59:08.561 [2024-12-09 10:10:15.484940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:59:08.561 [2024-12-09 10:10:15.484951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.561 [2024-12-09 10:10:15.485073] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:59:08.561 [2024-12-09 10:10:15.485093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:59:08.561 [2024-12-09 10:10:15.485106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:59:08.561 [2024-12-09 10:10:15.485118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:59:08.561 [2024-12-09 10:10:15.485141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:59:08.561 [2024-12-09 10:10:15.485162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:59:08.561 [2024-12-09 10:10:15.485175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:59:08.561 [2024-12-09 10:10:15.485197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:59:08.561 [2024-12-09 10:10:15.485209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:59:08.561 [2024-12-09 10:10:15.485220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:59:08.561 [2024-12-09 10:10:15.485244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:59:08.561 [2024-12-09 10:10:15.485275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:59:08.561 [2024-12-09 10:10:15.485287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:59:08.561 [2024-12-09 10:10:15.485309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:59:08.561 [2024-12-09 10:10:15.485319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:59:08.561 [2024-12-09 10:10:15.485341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:59:08.561 [2024-12-09 10:10:15.485362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:59:08.561 [2024-12-09 10:10:15.485372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:59:08.561 [2024-12-09 10:10:15.485398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:59:08.561 [2024-12-09 10:10:15.485409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:59:08.561 [2024-12-09 10:10:15.485429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:59:08.561 [2024-12-09 10:10:15.485440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:59:08.561 [2024-12-09 10:10:15.485461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:59:08.561 [2024-12-09 10:10:15.485472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:59:08.561 [2024-12-09 10:10:15.485493] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:59:08.561 [2024-12-09 10:10:15.485503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:59:08.561 [2024-12-09 10:10:15.485514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:59:08.561 [2024-12-09 10:10:15.485525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:59:08.561 [2024-12-09 10:10:15.485536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:59:08.561 [2024-12-09 10:10:15.485546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:59:08.561 [2024-12-09 10:10:15.485567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:59:08.561 [2024-12-09 10:10:15.485579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485591] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:59:08.561 [2024-12-09 10:10:15.485604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:59:08.561 [2024-12-09 10:10:15.485625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:59:08.561 [2024-12-09 10:10:15.485637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:59:08.561 [2024-12-09 10:10:15.485649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:59:08.561 [2024-12-09 10:10:15.485660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:59:08.561 [2024-12-09 10:10:15.485671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:59:08.561 [2024-12-09 10:10:15.485682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:59:08.561 [2024-12-09 10:10:15.485692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:59:08.561 [2024-12-09 10:10:15.485703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:59:08.561 [2024-12-09 10:10:15.485715] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:59:08.561 [2024-12-09 10:10:15.485729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:59:08.561 [2024-12-09 10:10:15.485747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:59:08.561 [2024-12-09 10:10:15.485759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:59:08.561 [2024-12-09 10:10:15.485771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:59:08.561 [2024-12-09 10:10:15.485782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:59:08.562 [2024-12-09 10:10:15.485793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:59:08.562 [2024-12-09 10:10:15.485804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:59:08.562 [2024-12-09 10:10:15.485816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:59:08.562 [2024-12-09 10:10:15.485827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:59:08.562 [2024-12-09 10:10:15.485839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:59:08.562 [2024-12-09 10:10:15.485850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:59:08.562 [2024-12-09 10:10:15.485861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:59:08.562 [2024-12-09 10:10:15.485885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:59:08.562 [2024-12-09 10:10:15.485903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:59:08.562 [2024-12-09 10:10:15.485914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:59:08.562 [2024-12-09 10:10:15.485926] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:59:08.562 [2024-12-09 10:10:15.485939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:59:08.562 [2024-12-09 10:10:15.485953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:59:08.562 [2024-12-09 10:10:15.485965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:59:08.562 [2024-12-09 10:10:15.485976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:59:08.562 [2024-12-09 10:10:15.485996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:59:08.562 [2024-12-09 10:10:15.486009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.562 [2024-12-09 10:10:15.486021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:59:08.562 [2024-12-09 10:10:15.486034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:59:08.562 [2024-12-09 10:10:15.486045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.562 [2024-12-09 10:10:15.533718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.562 [2024-12-09 10:10:15.533786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:59:08.562 [2024-12-09 10:10:15.533808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.601 ms 00:59:08.562 [2024-12-09 10:10:15.533832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.562 [2024-12-09 10:10:15.533970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.562 [2024-12-09 10:10:15.533990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:59:08.562 [2024-12-09 10:10:15.534004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.070 ms 00:59:08.562 [2024-12-09 10:10:15.534015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.854 [2024-12-09 10:10:15.599266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.854 [2024-12-09 10:10:15.599616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:59:08.854 [2024-12-09 10:10:15.599648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.128 ms 00:59:08.854 [2024-12-09 10:10:15.599662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.854 [2024-12-09 10:10:15.599738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.854 [2024-12-09 10:10:15.599757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:59:08.854 [2024-12-09 10:10:15.599779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:59:08.854 [2024-12-09 10:10:15.599791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.854 [2024-12-09 10:10:15.600595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.854 [2024-12-09 10:10:15.600635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:59:08.854 [2024-12-09 10:10:15.600650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:59:08.854 [2024-12-09 10:10:15.600677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.854 [2024-12-09 10:10:15.600850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.854 [2024-12-09 10:10:15.600906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:59:08.854 [2024-12-09 10:10:15.600928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:59:08.854 [2024-12-09 10:10:15.600939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.854 [2024-12-09 10:10:15.626098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.854 [2024-12-09 10:10:15.626150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:59:08.854 [2024-12-09 10:10:15.626169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.130 ms 00:59:08.855 [2024-12-09 10:10:15.626181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.647376] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:59:08.855 [2024-12-09 10:10:15.647446] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:59:08.855 [2024-12-09 10:10:15.647485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.647498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:59:08.855 [2024-12-09 10:10:15.647514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.100 ms 00:59:08.855 [2024-12-09 10:10:15.647542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.683509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.683594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:59:08.855 [2024-12-09 10:10:15.683646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.851 ms 00:59:08.855 [2024-12-09 10:10:15.683660] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.702454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.702515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:59:08.855 [2024-12-09 10:10:15.702533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.731 ms 00:59:08.855 [2024-12-09 10:10:15.702544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.719895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.719940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:59:08.855 [2024-12-09 10:10:15.719957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.304 ms 00:59:08.855 [2024-12-09 10:10:15.719970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.720906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.721069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:59:08.855 [2024-12-09 10:10:15.721098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:59:08.855 [2024-12-09 10:10:15.721119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.809940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.810045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:59:08.855 [2024-12-09 10:10:15.810070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.784 ms 00:59:08.855 [2024-12-09 10:10:15.810092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.824709] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:59:08.855 [2024-12-09 10:10:15.829158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.829208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:59:08.855 [2024-12-09 10:10:15.829229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.916 ms 00:59:08.855 [2024-12-09 10:10:15.829241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.829436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.829458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:59:08.855 [2024-12-09 10:10:15.829473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:59:08.855 [2024-12-09 10:10:15.829485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.829603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.829623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:59:08.855 [2024-12-09 10:10:15.829636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:59:08.855 [2024-12-09 10:10:15.829648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.829682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.829698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:59:08.855 [2024-12-09 10:10:15.829710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:59:08.855 [2024-12-09 10:10:15.829721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.829767] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:59:08.855 [2024-12-09 10:10:15.829799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.829811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:59:08.855 [2024-12-09 10:10:15.829823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:59:08.855 [2024-12-09 10:10:15.829834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.862375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.862597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:59:08.855 [2024-12-09 10:10:15.862631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.513 ms 00:59:08.855 [2024-12-09 10:10:15.862656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.862756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:08.855 [2024-12-09 10:10:15.862775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:59:08.855 [2024-12-09 10:10:15.862788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:59:08.855 [2024-12-09 10:10:15.862800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:08.855 [2024-12-09 10:10:15.864360] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.572 ms, result 0 00:59:10.231  [2024-12-09T10:10:18.211Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-09T10:10:19.147Z] Copying: 52/1024 [MB] (26 MBps) [2024-12-09T10:10:20.083Z] Copying: 79/1024 [MB] (26 MBps) [2024-12-09T10:10:21.018Z] Copying: 105/1024 [MB] (26 MBps) [2024-12-09T10:10:21.955Z] Copying: 131/1024 [MB] (26 MBps) [2024-12-09T10:10:22.890Z] Copying: 157/1024 [MB] (25 MBps) [2024-12-09T10:10:24.368Z] Copying: 181/1024 [MB] (24 MBps) [2024-12-09T10:10:24.940Z] Copying: 208/1024 [MB] (26 MBps) [2024-12-09T10:10:25.876Z] Copying: 234/1024 [MB] (25 MBps) [2024-12-09T10:10:27.283Z] Copying: 260/1024 [MB] (26 MBps) [2024-12-09T10:10:28.219Z] Copying: 287/1024 [MB] (26 MBps) [2024-12-09T10:10:29.156Z] Copying: 312/1024 [MB] (25 MBps) [2024-12-09T10:10:30.093Z] Copying: 339/1024 [MB] (26 MBps) [2024-12-09T10:10:31.029Z] Copying: 363/1024 [MB] (24 MBps) [2024-12-09T10:10:32.006Z] Copying: 387/1024 [MB] (23 MBps) [2024-12-09T10:10:32.942Z] Copying: 412/1024 [MB] (25 MBps) [2024-12-09T10:10:33.878Z] Copying: 438/1024 [MB] (25 MBps) [2024-12-09T10:10:35.253Z] Copying: 463/1024 [MB] (25 MBps) [2024-12-09T10:10:36.188Z] Copying: 487/1024 [MB] (24 MBps) [2024-12-09T10:10:37.124Z] Copying: 513/1024 [MB] (25 MBps) [2024-12-09T10:10:38.060Z] Copying: 539/1024 [MB] (25 MBps) [2024-12-09T10:10:39.000Z] Copying: 564/1024 [MB] (25 MBps) [2024-12-09T10:10:39.936Z] Copying: 587/1024 [MB] (23 MBps) [2024-12-09T10:10:40.878Z] Copying: 609/1024 [MB] (22 MBps) [2024-12-09T10:10:42.284Z] Copying: 632/1024 [MB] (23 MBps) [2024-12-09T10:10:43.220Z] Copying: 657/1024 [MB] (24 MBps) [2024-12-09T10:10:44.155Z] Copying: 683/1024 [MB] (25 
MBps) [2024-12-09T10:10:45.091Z] Copying: 709/1024 [MB] (26 MBps) [2024-12-09T10:10:46.025Z] Copying: 734/1024 [MB] (25 MBps) [2024-12-09T10:10:46.959Z] Copying: 760/1024 [MB] (25 MBps) [2024-12-09T10:10:47.893Z] Copying: 786/1024 [MB] (26 MBps) [2024-12-09T10:10:49.269Z] Copying: 811/1024 [MB] (24 MBps) [2024-12-09T10:10:50.203Z] Copying: 835/1024 [MB] (24 MBps) [2024-12-09T10:10:51.137Z] Copying: 860/1024 [MB] (24 MBps) [2024-12-09T10:10:52.072Z] Copying: 886/1024 [MB] (25 MBps) [2024-12-09T10:10:53.007Z] Copying: 912/1024 [MB] (25 MBps) [2024-12-09T10:10:53.942Z] Copying: 935/1024 [MB] (23 MBps) [2024-12-09T10:10:54.878Z] Copying: 960/1024 [MB] (24 MBps) [2024-12-09T10:10:56.254Z] Copying: 983/1024 [MB] (23 MBps) [2024-12-09T10:10:56.837Z] Copying: 1007/1024 [MB] (23 MBps) [2024-12-09T10:10:56.837Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-09 10:10:56.567087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.793 [2024-12-09 10:10:56.567208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:59:49.793 [2024-12-09 10:10:56.567261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:59:49.793 [2024-12-09 10:10:56.567273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.793 [2024-12-09 10:10:56.567332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:59:49.793 [2024-12-09 10:10:56.571634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.793 [2024-12-09 10:10:56.571671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:59:49.793 [2024-12-09 10:10:56.571711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.279 ms 00:59:49.793 [2024-12-09 10:10:56.571721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.793 [2024-12-09 10:10:56.573768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.793 [2024-12-09 10:10:56.574024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:59:49.793 [2024-12-09 10:10:56.574054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.017 ms 00:59:49.793 [2024-12-09 10:10:56.574068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.793 [2024-12-09 10:10:56.593408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.793 [2024-12-09 10:10:56.593478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:59:49.793 [2024-12-09 10:10:56.593494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.312 ms 00:59:49.793 [2024-12-09 10:10:56.593504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.793 [2024-12-09 10:10:56.601296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.793 [2024-12-09 10:10:56.601356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:59:49.793 [2024-12-09 10:10:56.601377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.698 ms 00:59:49.793 [2024-12-09 10:10:56.601388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.793 [2024-12-09 10:10:56.636968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.793 [2024-12-09 10:10:56.637017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:59:49.793 [2024-12-09 10:10:56.637050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 35.499 ms 00:59:49.793 [2024-12-09 10:10:56.637062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.793 [2024-12-09 10:10:56.656871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.793 [2024-12-09 10:10:56.656969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:59:49.793 [2024-12-09 10:10:56.656986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.761 ms 00:59:49.793 [2024-12-09 10:10:56.656999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.793 [2024-12-09 10:10:56.657159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.793 [2024-12-09 10:10:56.657187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:59:49.793 [2024-12-09 10:10:56.657201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:59:49.793 [2024-12-09 10:10:56.657212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.793 [2024-12-09 10:10:56.691089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.793 [2024-12-09 10:10:56.691282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:59:49.793 [2024-12-09 10:10:56.691311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.854 ms 00:59:49.793 [2024-12-09 10:10:56.691323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.793 [2024-12-09 10:10:56.724163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.793 [2024-12-09 10:10:56.724211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:59:49.793 [2024-12-09 10:10:56.724243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.792 ms 00:59:49.793 [2024-12-09 10:10:56.724270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.793 [2024-12-09 10:10:56.755848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.793 [2024-12-09 10:10:56.755904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:59:49.793 [2024-12-09 10:10:56.755921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.517 ms 00:59:49.793 [2024-12-09 10:10:56.755932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.793 [2024-12-09 10:10:56.787536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.793 [2024-12-09 10:10:56.787581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:59:49.793 [2024-12-09 10:10:56.787598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.511 ms 00:59:49.793 [2024-12-09 10:10:56.787610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.793 [2024-12-09 10:10:56.787660] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:59:49.793 [2024-12-09 10:10:56.787684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:59:49.793 [2024-12-09 10:10:56.787747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.787998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.788011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.788023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.788036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.788048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.788060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.788072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.788084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:59:49.793 [2024-12-09 10:10:56.788095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788701] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:59:49.794 [2024-12-09 10:10:56.788985] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:59:49.794 [2024-12-09 10:10:56.789004] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e44c0e4-883f-47a4-b8a4-dbaa5e36b4fb 00:59:49.794 [2024-12-09 10:10:56.789016] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:59:49.794 [2024-12-09 10:10:56.789027] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:59:49.794 [2024-12-09 10:10:56.789038] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:59:49.794 [2024-12-09 10:10:56.789050] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:59:49.794 [2024-12-09 10:10:56.789061] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:59:49.794 [2024-12-09 10:10:56.789085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:59:49.794 [2024-12-09 10:10:56.789097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:59:49.794 [2024-12-09 10:10:56.789107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:59:49.794 [2024-12-09 10:10:56.789117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:59:49.794 [2024-12-09 10:10:56.789129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.794 [2024-12-09 10:10:56.789141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:59:49.794 [2024-12-09 10:10:56.789153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:59:49.794 [2024-12-09 10:10:56.789164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.794 [2024-12-09 10:10:56.806335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.794 [2024-12-09 10:10:56.806519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:59:49.794 [2024-12-09 10:10:56.806548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.121 ms 00:59:49.794 [2024-12-09 10:10:56.806562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:49.794 [2024-12-09 10:10:56.807039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:49.794 [2024-12-09 10:10:56.807064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:59:49.794 [2024-12-09 10:10:56.807078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:59:49.794 [2024-12-09 10:10:56.807098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:56.853161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:59:50.054 [2024-12-09 10:10:56.853224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:59:50.054 [2024-12-09 10:10:56.853244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:59:50.054 [2024-12-09 10:10:56.853276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:56.853393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:59:50.054 [2024-12-09 10:10:56.853416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:59:50.054 [2024-12-09 10:10:56.853429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:59:50.054 [2024-12-09 10:10:56.853450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:56.853585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:59:50.054 [2024-12-09 10:10:56.853607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:59:50.054 [2024-12-09 10:10:56.853621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:59:50.054 [2024-12-09 10:10:56.853633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:56.853657] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:59:50.054 [2024-12-09 10:10:56.853672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:59:50.054 [2024-12-09 10:10:56.853684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:59:50.054 [2024-12-09 10:10:56.853695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:56.974551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:59:50.054 [2024-12-09 10:10:56.974849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:59:50.054 [2024-12-09 10:10:56.974896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:59:50.054 [2024-12-09 10:10:56.974910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:57.069613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:59:50.054 [2024-12-09 10:10:57.069675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:59:50.054 [2024-12-09 10:10:57.069695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:59:50.054 [2024-12-09 10:10:57.069715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:57.069857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:59:50.054 [2024-12-09 10:10:57.069877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:59:50.054 [2024-12-09 10:10:57.069918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:59:50.054 [2024-12-09 10:10:57.069930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:57.069979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:59:50.054 [2024-12-09 10:10:57.069995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:59:50.054 [2024-12-09 10:10:57.070008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:59:50.054 [2024-12-09 10:10:57.070020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:57.070153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:59:50.054 [2024-12-09 10:10:57.070174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:59:50.054 [2024-12-09 10:10:57.070187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:59:50.054 [2024-12-09 10:10:57.070199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:57.070258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:59:50.054 [2024-12-09 10:10:57.070310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:59:50.054 [2024-12-09 10:10:57.070325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:59:50.054 [2024-12-09 10:10:57.070336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:57.070385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:59:50.054 [2024-12-09 10:10:57.070409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:59:50.054 [2024-12-09 10:10:57.070422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:59:50.054 [2024-12-09 10:10:57.070433] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:57.070488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:59:50.054 [2024-12-09 10:10:57.070505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:59:50.054 [2024-12-09 10:10:57.070518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:59:50.054 [2024-12-09 10:10:57.070529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:50.054 [2024-12-09 10:10:57.070684] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 503.553 ms, result 0 00:59:51.958 00:59:51.958 00:59:51.958 10:10:58 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:59:51.959 [2024-12-09 10:10:58.737791] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:59:51.959 [2024-12-09 10:10:58.738014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80156 ] 00:59:51.959 [2024-12-09 10:10:58.932773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:52.217 [2024-12-09 10:10:59.081970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:52.476 [2024-12-09 10:10:59.486265] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:59:52.476 [2024-12-09 10:10:59.486342] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:59:52.736 [2024-12-09 10:10:59.658354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.736 [2024-12-09 10:10:59.658425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:59:52.736 [2024-12-09 10:10:59.658445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:59:52.736 [2024-12-09 10:10:59.658458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.736 [2024-12-09 10:10:59.658585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.736 [2024-12-09 10:10:59.658621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:59:52.736 [2024-12-09 10:10:59.658634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:59:52.736 [2024-12-09 10:10:59.658645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.736 [2024-12-09 10:10:59.658675] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:59:52.736 [2024-12-09 10:10:59.659698] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:59:52.736 [2024-12-09 10:10:59.659733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.736 [2024-12-09 10:10:59.659761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:59:52.736 [2024-12-09 10:10:59.659774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:59:52.736 [2024-12-09 10:10:59.659785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.737 [2024-12-09 10:10:59.662510] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:59:52.737 [2024-12-09 10:10:59.680739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.737 [2024-12-09 10:10:59.680783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:59:52.737 [2024-12-09 10:10:59.680832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.231 ms 00:59:52.737 [2024-12-09 10:10:59.680844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.737 [2024-12-09 10:10:59.680997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.737 [2024-12-09 10:10:59.681028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:59:52.737 [2024-12-09 10:10:59.681041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:59:52.737 [2024-12-09 10:10:59.681053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.737 [2024-12-09 10:10:59.691628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.737 [2024-12-09 10:10:59.691668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:59:52.737 [2024-12-09 10:10:59.691700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.478 ms 00:59:52.737 [2024-12-09 10:10:59.691717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.737 [2024-12-09 10:10:59.691813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.737 [2024-12-09 10:10:59.691832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:59:52.737 [2024-12-09 10:10:59.691846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:59:52.737 [2024-12-09 10:10:59.691857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.737 [2024-12-09 10:10:59.691962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.737 [2024-12-09 10:10:59.691981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:59:52.737 [2024-12-09 10:10:59.692009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:59:52.737 [2024-12-09 10:10:59.692020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.737 [2024-12-09 10:10:59.692059] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:59:52.737 [2024-12-09 10:10:59.697705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.737 [2024-12-09 10:10:59.697742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:59:52.737 [2024-12-09 10:10:59.697778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.654 ms 00:59:52.737 [2024-12-09 10:10:59.697821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.737 [2024-12-09 10:10:59.697862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.737 [2024-12-09 10:10:59.697890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:59:52.737 [2024-12-09 10:10:59.697922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:59:52.737 [2024-12-09 10:10:59.697934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.737 [2024-12-09 10:10:59.697985] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:59:52.737 [2024-12-09 10:10:59.698020] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:59:52.737 [2024-12-09 10:10:59.698063] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:59:52.737 [2024-12-09 10:10:59.698099] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:59:52.737 [2024-12-09 10:10:59.698226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:59:52.737 [2024-12-09 10:10:59.698259] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:59:52.737 [2024-12-09 10:10:59.698274] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:59:52.737 [2024-12-09 10:10:59.698307] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:59:52.737 [2024-12-09 10:10:59.698330] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:59:52.737 [2024-12-09 10:10:59.698352] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:59:52.737 [2024-12-09 10:10:59.698364] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:59:52.737 [2024-12-09 10:10:59.698396] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:59:52.737 [2024-12-09 10:10:59.698407] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:59:52.737 [2024-12-09 10:10:59.698419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.737 [2024-12-09 10:10:59.698430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:59:52.737 [2024-12-09 10:10:59.698442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:59:52.737 [2024-12-09 10:10:59.698452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.737 [2024-12-09 10:10:59.698580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.737 [2024-12-09 10:10:59.698634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:59:52.737 [2024-12-09 10:10:59.698662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:59:52.737 [2024-12-09 10:10:59.698673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.737 [2024-12-09 10:10:59.698827] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:59:52.737 [2024-12-09 10:10:59.698856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:59:52.737 [2024-12-09 10:10:59.698877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:59:52.737 [2024-12-09 10:10:59.698904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:59:52.737 [2024-12-09 10:10:59.698915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:59:52.737 [2024-12-09 10:10:59.698925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:59:52.737 [2024-12-09 10:10:59.698936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:59:52.737 [2024-12-09 10:10:59.698946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:59:52.737 [2024-12-09 10:10:59.698956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:59:52.737 [2024-12-09 
10:10:59.698980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:59:52.737 [2024-12-09 10:10:59.698990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:59:52.737 [2024-12-09 10:10:59.699000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:59:52.737 [2024-12-09 10:10:59.699010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:59:52.737 [2024-12-09 10:10:59.699033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:59:52.737 [2024-12-09 10:10:59.699044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:59:52.737 [2024-12-09 10:10:59.699054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:59:52.737 [2024-12-09 10:10:59.699064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:59:52.737 [2024-12-09 10:10:59.699075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:59:52.737 [2024-12-09 10:10:59.699086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:59:52.737 [2024-12-09 10:10:59.699096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:59:52.737 [2024-12-09 10:10:59.699106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:59:52.737 [2024-12-09 10:10:59.699132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:59:52.737 [2024-12-09 10:10:59.699159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:59:52.737 [2024-12-09 10:10:59.699169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:59:52.737 [2024-12-09 10:10:59.699180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:59:52.737 [2024-12-09 10:10:59.699189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:59:52.737 [2024-12-09 10:10:59.699199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:59:52.737 [2024-12-09 10:10:59.699209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:59:52.737 [2024-12-09 10:10:59.699218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:59:52.737 [2024-12-09 10:10:59.699228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:59:52.737 [2024-12-09 10:10:59.699253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:59:52.737 [2024-12-09 10:10:59.699278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:59:52.737 [2024-12-09 10:10:59.699304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:59:52.737 [2024-12-09 10:10:59.699315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:59:52.737 [2024-12-09 10:10:59.699325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:59:52.737 [2024-12-09 10:10:59.699335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:59:52.737 [2024-12-09 10:10:59.699346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:59:52.737 [2024-12-09 10:10:59.699372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:59:52.737 [2024-12-09 10:10:59.699385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:59:52.737 [2024-12-09 10:10:59.699395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:59:52.737 [2024-12-09 10:10:59.699406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:59:52.737 [2024-12-09 10:10:59.699416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:59:52.737 [2024-12-09 10:10:59.699432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:59:52.737 [2024-12-09 10:10:59.699443] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:59:52.737 [2024-12-09 10:10:59.699455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:59:52.737 [2024-12-09 10:10:59.699481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:59:52.737 [2024-12-09 10:10:59.699493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:59:52.737 [2024-12-09 10:10:59.699504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:59:52.737 [2024-12-09 10:10:59.699515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:59:52.737 [2024-12-09 10:10:59.699526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:59:52.737 [2024-12-09 10:10:59.699537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:59:52.737 [2024-12-09 10:10:59.699547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:59:52.737 [2024-12-09 10:10:59.699557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:59:52.737 [2024-12-09 10:10:59.699570] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:59:52.737 [2024-12-09 10:10:59.699599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:59:52.737 [2024-12-09 10:10:59.699617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:59:52.738 [2024-12-09 10:10:59.699628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:59:52.738 [2024-12-09 10:10:59.699641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:59:52.738 [2024-12-09 10:10:59.699667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:59:52.738 [2024-12-09 10:10:59.699678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:59:52.738 [2024-12-09 10:10:59.699689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:59:52.738 [2024-12-09 10:10:59.699700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:59:52.738 [2024-12-09 10:10:59.699725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:59:52.738 [2024-12-09 10:10:59.699757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:59:52.738 [2024-12-09 10:10:59.699787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:59:52.738 [2024-12-09 10:10:59.699798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:59:52.738 [2024-12-09 10:10:59.699809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:59:52.738 [2024-12-09 10:10:59.699836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:59:52.738 [2024-12-09 10:10:59.699847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:59:52.738 [2024-12-09 10:10:59.699858] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:59:52.738 [2024-12-09 10:10:59.699870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:59:52.738 [2024-12-09 10:10:59.699882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:59:52.738 [2024-12-09 10:10:59.699893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:59:52.738 [2024-12-09 10:10:59.699904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:59:52.738 [2024-12-09 10:10:59.699915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:59:52.738 [2024-12-09 10:10:59.699927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.738 [2024-12-09 10:10:59.699938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:59:52.738 [2024-12-09 10:10:59.699951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms 00:59:52.738 [2024-12-09 10:10:59.699962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.738 [2024-12-09 10:10:59.747614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.738 [2024-12-09 10:10:59.747677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:59:52.738 [2024-12-09 10:10:59.747697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.567 ms 00:59:52.738 [2024-12-09 10:10:59.747716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:52.738 [2024-12-09 10:10:59.747870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:52.738 [2024-12-09 10:10:59.747895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:59:52.738 [2024-12-09 10:10:59.747911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:59:52.738 [2024-12-09 10:10:59.747922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:10:59.813326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:10:59.813382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:59:53.028 [2024-12-09 10:10:59.813402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.305 ms 00:59:53.028 [2024-12-09 10:10:59.813416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:10:59.813500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 
10:10:59.813519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:59:53.028 [2024-12-09 10:10:59.813539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:59:53.028 [2024-12-09 10:10:59.813551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:10:59.814318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:10:59.814354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:59:53.028 [2024-12-09 10:10:59.814370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:59:53.028 [2024-12-09 10:10:59.814383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:10:59.814574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:10:59.814601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:59:53.028 [2024-12-09 10:10:59.814623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:59:53.028 [2024-12-09 10:10:59.814651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:10:59.835897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:10:59.835947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:59:53.028 [2024-12-09 10:10:59.835980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.200 ms 00:59:53.028 [2024-12-09 10:10:59.835993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:10:59.855146] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:59:53.028 [2024-12-09 10:10:59.855193] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:59:53.028 [2024-12-09 10:10:59.855237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:10:59.855251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:59:53.028 [2024-12-09 10:10:59.855276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.053 ms 00:59:53.028 [2024-12-09 10:10:59.855292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:10:59.889601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:10:59.889678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:59:53.028 [2024-12-09 10:10:59.889697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.190 ms 00:59:53.028 [2024-12-09 10:10:59.889710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:10:59.907720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:10:59.907778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:59:53.028 [2024-12-09 10:10:59.907811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.929 ms 00:59:53.028 [2024-12-09 10:10:59.907838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:10:59.925637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:10:59.925681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:59:53.028 [2024-12-09 10:10:59.925713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.738 ms 00:59:53.028 [2024-12-09 10:10:59.925725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:10:59.926719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:10:59.926770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:59:53.028 [2024-12-09 10:10:59.926806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:59:53.028 [2024-12-09 10:10:59.926819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:11:00.013599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:11:00.013681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:59:53.028 [2024-12-09 10:11:00.013724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.745 ms 00:59:53.028 [2024-12-09 10:11:00.013752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:11:00.026966] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:59:53.028 [2024-12-09 10:11:00.030008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:11:00.030044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:59:53.028 [2024-12-09 10:11:00.030060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.186 ms 00:59:53.028 [2024-12-09 10:11:00.030073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:11:00.030179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:11:00.030208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:59:53.028 [2024-12-09 10:11:00.030229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:59:53.028 [2024-12-09 10:11:00.030240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:11:00.030391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:11:00.030433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:59:53.028 [2024-12-09 10:11:00.030448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:59:53.028 [2024-12-09 10:11:00.030460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:11:00.030496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:11:00.030512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:59:53.028 [2024-12-09 10:11:00.030525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:59:53.028 [2024-12-09 10:11:00.030537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.028 [2024-12-09 10:11:00.030588] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:59:53.028 [2024-12-09 10:11:00.030607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.028 [2024-12-09 10:11:00.030619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:59:53.028 [2024-12-09 10:11:00.030633] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:59:53.028 [2024-12-09 10:11:00.030645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.307 [2024-12-09 10:11:00.064510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.307 [2024-12-09 10:11:00.064558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:59:53.307 [2024-12-09 10:11:00.064584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.839 ms 00:59:53.307 [2024-12-09 10:11:00.064597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.307 [2024-12-09 10:11:00.064713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:59:53.307 [2024-12-09 10:11:00.064733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:59:53.307 [2024-12-09 10:11:00.064746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:59:53.307 [2024-12-09 10:11:00.064757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:59:53.307 [2024-12-09 10:11:00.066293] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.299 ms, result 0 00:59:54.686  [2024-12-09T10:11:02.665Z] Copying: 24/1024 [MB] (24 MBps) [... intermediate copy progress updates elided ...] [2024-12-09T10:11:42.917Z] Copying: 1024/1024 [MB] (average 24 MBps)
[2024-12-09 10:11:42.883406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:35.873 [2024-12-09 10:11:42.883486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:00:35.873 [2024-12-09 10:11:42.883508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:00:35.873 [2024-12-09 10:11:42.883520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:35.873 [2024-12-09 10:11:42.883553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:00:35.873 [2024-12-09 10:11:42.887450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:35.873 [2024-12-09 10:11:42.887493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:00:35.873 [2024-12-09 10:11:42.887508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.872 ms 01:00:35.873 [2024-12-09 10:11:42.887520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:35.873 [2024-12-09 10:11:42.887779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:35.873 [2024-12-09 10:11:42.887813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:00:35.873 [2024-12-09 10:11:42.887827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 01:00:35.873 [2024-12-09 10:11:42.887839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:35.873 [2024-12-09 10:11:42.891373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:35.873 [2024-12-09 10:11:42.891414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:00:35.873 [2024-12-09 10:11:42.891428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.512 ms 01:00:35.873 [2024-12-09 10:11:42.891447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:35.873 [2024-12-09 10:11:42.898151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:35.873 [2024-12-09 10:11:42.898186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:00:35.873 [2024-12-09 10:11:42.898200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.680 ms 01:00:35.873 [2024-12-09 10:11:42.898211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.133 [2024-12-09 10:11:42.932498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:36.133 [2024-12-09 10:11:42.932543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:00:36.133 [2024-12-09 10:11:42.932561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.205 ms 01:00:36.133 [2024-12-09 10:11:42.932572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.133 [2024-12-09 10:11:42.952461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:36.133 [2024-12-09 10:11:42.952512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:00:36.133 [2024-12-09 10:11:42.952529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.833 ms 01:00:36.133 [2024-12-09 10:11:42.952541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 01:00:36.133 [2024-12-09 10:11:42.952687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:36.133 [2024-12-09 10:11:42.952706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:00:36.133 [2024-12-09 10:11:42.952728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 01:00:36.133 [2024-12-09 10:11:42.952740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.133 [2024-12-09 10:11:42.986221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:36.133 [2024-12-09 10:11:42.986275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:00:36.133 [2024-12-09 10:11:42.986293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.458 ms 01:00:36.133 [2024-12-09 10:11:42.986304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.133 [2024-12-09 10:11:43.018310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:36.133 [2024-12-09 10:11:43.018371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:00:36.133 [2024-12-09 10:11:43.018388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.961 ms 01:00:36.133 [2024-12-09 10:11:43.018400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.133 [2024-12-09 10:11:43.053108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:36.133 [2024-12-09 10:11:43.053156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:00:36.133 [2024-12-09 10:11:43.053173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.652 ms 01:00:36.133 [2024-12-09 10:11:43.053185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.133 [2024-12-09 10:11:43.086707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:36.133 [2024-12-09 10:11:43.086801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:00:36.133 [2024-12-09 10:11:43.086817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.417 ms 01:00:36.133 [2024-12-09 10:11:43.086844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.133 [2024-12-09 10:11:43.086904] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:00:36.133 [2024-12-09 10:11:43.086936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.086955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.086967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.086979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.086991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 
[2024-12-09 10:11:43.087038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 
state: free 01:00:36.133 [2024-12-09 10:11:43.087363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:00:36.133 [2024-12-09 10:11:43.087553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 
0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.087995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:00:36.134 [2024-12-09 10:11:43.088156] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:00:36.134 [2024-12-09 10:11:43.088167] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e44c0e4-883f-47a4-b8a4-dbaa5e36b4fb 01:00:36.134 [2024-12-09 10:11:43.088180] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:00:36.134 [2024-12-09 10:11:43.088191] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:00:36.134 [2024-12-09 10:11:43.088202] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:00:36.134 [2024-12-09 10:11:43.088214] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:00:36.134 [2024-12-09 10:11:43.088239] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:00:36.134 [2024-12-09 10:11:43.088264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:00:36.134 [2024-12-09 10:11:43.088277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:00:36.134 
[2024-12-09 10:11:43.088288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:00:36.134 [2024-12-09 10:11:43.088304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:00:36.134 [2024-12-09 10:11:43.088315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:36.134 [2024-12-09 10:11:43.088326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:00:36.134 [2024-12-09 10:11:43.088338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 01:00:36.134 [2024-12-09 10:11:43.088355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.134 [2024-12-09 10:11:43.106285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:36.134 [2024-12-09 10:11:43.106326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:00:36.134 [2024-12-09 10:11:43.106343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.855 ms 01:00:36.134 [2024-12-09 10:11:43.106354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.134 [2024-12-09 10:11:43.106819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:36.134 [2024-12-09 10:11:43.106843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:00:36.134 [2024-12-09 10:11:43.106864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 01:00:36.134 [2024-12-09 10:11:43.106884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.134 [2024-12-09 10:11:43.155383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:00:36.134 [2024-12-09 10:11:43.155439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:00:36.134 [2024-12-09 10:11:43.155456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:00:36.134 [2024-12-09 10:11:43.155469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.134 [2024-12-09 10:11:43.155546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:00:36.134 [2024-12-09 10:11:43.155562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:00:36.134 [2024-12-09 10:11:43.155581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:00:36.134 [2024-12-09 10:11:43.155593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.134 [2024-12-09 10:11:43.155684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:00:36.134 [2024-12-09 10:11:43.155704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:00:36.134 [2024-12-09 10:11:43.155716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:00:36.134 [2024-12-09 10:11:43.155727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.134 [2024-12-09 10:11:43.155752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:00:36.134 [2024-12-09 10:11:43.155767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:00:36.134 [2024-12-09 10:11:43.155778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:00:36.134 [2024-12-09 10:11:43.155796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.393 [2024-12-09 10:11:43.277834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:00:36.393 [2024-12-09 10:11:43.277937] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:00:36.393 [2024-12-09 10:11:43.277957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:00:36.393 [2024-12-09 10:11:43.277970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.393 [2024-12-09 10:11:43.376635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:00:36.393 [2024-12-09 10:11:43.376740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:00:36.393 [2024-12-09 10:11:43.376782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:00:36.393 [2024-12-09 10:11:43.376795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.393 [2024-12-09 10:11:43.376914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:00:36.393 [2024-12-09 10:11:43.376933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:00:36.393 [2024-12-09 10:11:43.376953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:00:36.393 [2024-12-09 10:11:43.376965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.393 [2024-12-09 10:11:43.377016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:00:36.393 [2024-12-09 10:11:43.377033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:00:36.393 [2024-12-09 10:11:43.377046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:00:36.393 [2024-12-09 10:11:43.377057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.393 [2024-12-09 10:11:43.377196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:00:36.393 [2024-12-09 10:11:43.377223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:00:36.393 [2024-12-09 10:11:43.377237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:00:36.393 [2024-12-09 10:11:43.377272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.393 [2024-12-09 10:11:43.377327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:00:36.393 [2024-12-09 10:11:43.377350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:00:36.393 [2024-12-09 10:11:43.377363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:00:36.393 [2024-12-09 10:11:43.377374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.393 [2024-12-09 10:11:43.377427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:00:36.393 [2024-12-09 10:11:43.377449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:00:36.393 [2024-12-09 10:11:43.377462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:00:36.393 [2024-12-09 10:11:43.377474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.393 [2024-12-09 10:11:43.377526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:00:36.393 [2024-12-09 10:11:43.377543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:00:36.393 [2024-12-09 10:11:43.377555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:00:36.393 [2024-12-09 10:11:43.377567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:36.393 [2024-12-09 10:11:43.377721] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 494.280 ms, result 0 01:00:37.772 01:00:37.772 01:00:37.772 10:11:44 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:00:40.304 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 01:00:40.304 10:11:47 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 01:00:40.304 [2024-12-09 10:11:47.147669] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:00:40.304 [2024-12-09 10:11:47.147986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80633 ] 01:00:40.564 [2024-12-09 10:11:47.351606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:40.564 [2024-12-09 10:11:47.516283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:41.133 [2024-12-09 10:11:47.919955] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:00:41.133 [2024-12-09 10:11:47.920032] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:00:41.133 [2024-12-09 10:11:48.088763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.133 [2024-12-09 10:11:48.088823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:00:41.133 [2024-12-09 10:11:48.088845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:00:41.133 [2024-12-09 10:11:48.088857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.133 [2024-12-09 10:11:48.088926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.133 [2024-12-09 10:11:48.088950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:00:41.133 [2024-12-09 10:11:48.088964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 01:00:41.133 [2024-12-09 10:11:48.088990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.133 [2024-12-09 10:11:48.089061] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:00:41.133 [2024-12-09 10:11:48.090007] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:00:41.133 [2024-12-09 10:11:48.090042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.133 [2024-12-09 10:11:48.090057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:00:41.133 [2024-12-09 10:11:48.090070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 01:00:41.133 [2024-12-09 10:11:48.090081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.133 [2024-12-09 10:11:48.092387] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:00:41.133 [2024-12-09 10:11:48.111820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.133 [2024-12-09 10:11:48.111879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:00:41.133 [2024-12-09 10:11:48.111928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.435 ms 01:00:41.133 
[2024-12-09 10:11:48.111941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.133 [2024-12-09 10:11:48.112040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.133 [2024-12-09 10:11:48.112061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:00:41.133 [2024-12-09 10:11:48.112074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 01:00:41.133 [2024-12-09 10:11:48.112086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.133 [2024-12-09 10:11:48.122714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.133 [2024-12-09 10:11:48.122779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:00:41.133 [2024-12-09 10:11:48.122837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.503 ms 01:00:41.133 [2024-12-09 10:11:48.122855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.133 [2024-12-09 10:11:48.122994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.133 [2024-12-09 10:11:48.123022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:00:41.133 [2024-12-09 10:11:48.123036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 01:00:41.133 [2024-12-09 10:11:48.123052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.133 [2024-12-09 10:11:48.123158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.133 [2024-12-09 10:11:48.123210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:00:41.133 [2024-12-09 10:11:48.123223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 01:00:41.133 [2024-12-09 10:11:48.123250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.133 [2024-12-09 10:11:48.123293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:00:41.133 [2024-12-09 10:11:48.129304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.133 [2024-12-09 10:11:48.129367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:00:41.133 [2024-12-09 10:11:48.129453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.020 ms 01:00:41.133 [2024-12-09 10:11:48.129470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.133 [2024-12-09 10:11:48.129521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.133 [2024-12-09 10:11:48.129539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:00:41.133 [2024-12-09 10:11:48.129552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:00:41.133 [2024-12-09 10:11:48.129563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.133 [2024-12-09 10:11:48.129631] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:00:41.133 [2024-12-09 10:11:48.129666] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:00:41.133 [2024-12-09 10:11:48.129709] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:00:41.133 [2024-12-09 10:11:48.129734] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:00:41.133 
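The mngt/ftl_mngt.c records that dominate this log follow a fixed pattern: for every management step, trace_step emits an Action marker, the step name, its wall-clock duration, and a status code, and finish_msg then reports the pipeline total ('FTL startup', duration = 407.299 ms, result 0 above; 'FTL shutdown' likewise). The zero-duration Rollback records seen earlier appear to be the same machinery replaying completed steps in reverse while the device is torn down. A minimal sketch of that instrumentation pattern, using hypothetical names and types rather than SPDK's actual ftl_mngt API, could look like:

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical step descriptor mirroring the Action/name/duration/status
     * records above; not SPDK's real ftl_mngt structures. */
    struct mngt_step {
        const char *name;
        int (*fn)(void);
    };

    static double ms_since(const struct timespec *t0)
    {
        struct timespec t1;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0->tv_sec) * 1e3 + (t1.tv_nsec - t0->tv_nsec) / 1e6;
    }

    /* Run the steps in order, logging name, duration and status per step
     * the way trace_step() records read, then a finish_msg()-style summary. */
    static int run_process(const char *proc, const struct mngt_step *steps, int n)
    {
        struct timespec p0;
        int rc = 0;

        clock_gettime(CLOCK_MONOTONIC, &p0);
        for (int i = 0; i < n && rc == 0; i++) {
            struct timespec s0;
            clock_gettime(CLOCK_MONOTONIC, &s0);
            rc = steps[i].fn();
            printf("[FTL][ftl0] Action\n[FTL][ftl0] name: %s\n"
                   "[FTL][ftl0] duration: %.3f ms\n[FTL][ftl0] status: %d\n",
                   steps[i].name, ms_since(&s0), rc);
        }
        printf("[FTL][ftl0] Management process finished, name '%s', "
               "duration = %.3f ms, result %d\n", proc, ms_since(&p0), rc);
        return rc;
    }

    static int step_ok(void) { return 0; }

    int main(void)
    {
        const struct mngt_step steps[] = {
            { "Check configuration", step_ok },
            { "Open base bdev", step_ok },
        };
        return run_process("FTL startup", steps, 2);
    }

On a non-zero status a real pipeline would stop and unwind the steps already run, which matches the ordering of the Rollback entries in the shutdown sequence above.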
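Among the statistics dumped earlier by ftl_debug.c, the line WAF: inf next to total writes: 960 and user writes: 0 is consistent with the conventional write-amplification definition, total media writes divided by user writes, which has no finite value before any user write has landed. A worked sketch, assuming that conventional definition rather than SPDK's exact accounting:

    #include <math.h>
    #include <stdio.h>

    /* Conventional write-amplification factor; the exact accounting behind
     * the 'WAF:' line lives in SPDK's ftl_debug.c and may differ. */
    static double waf(unsigned long total_writes, unsigned long user_writes)
    {
        return user_writes ? (double)total_writes / user_writes : INFINITY;
    }

    int main(void)
    {
        printf("WAF: %g\n", waf(960, 0));    /* prints "WAF: inf", as in the dump */
        printf("WAF: %g\n", waf(1200, 960)); /* 1.25 once user writes exist */
        return 0;
    }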
[2024-12-09 10:11:48.129845] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:00:41.134 [2024-12-09 10:11:48.129862] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:00:41.134 [2024-12-09 10:11:48.129877] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:00:41.134 [2024-12-09 10:11:48.129903] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:00:41.134 [2024-12-09 10:11:48.129927] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:00:41.134 [2024-12-09 10:11:48.129940] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:00:41.134 [2024-12-09 10:11:48.129952] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:00:41.134 [2024-12-09 10:11:48.129968] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:00:41.134 [2024-12-09 10:11:48.129980] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:00:41.134 [2024-12-09 10:11:48.129992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.134 [2024-12-09 10:11:48.130004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:00:41.134 [2024-12-09 10:11:48.130017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 01:00:41.134 [2024-12-09 10:11:48.130028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.134 [2024-12-09 10:11:48.130127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.134 [2024-12-09 10:11:48.130145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:00:41.134 [2024-12-09 10:11:48.130158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:00:41.134 [2024-12-09 10:11:48.130176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.134 [2024-12-09 10:11:48.130325] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:00:41.134 [2024-12-09 10:11:48.130353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:00:41.134 [2024-12-09 10:11:48.130368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:00:41.134 [2024-12-09 10:11:48.130380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:00:41.134 [2024-12-09 10:11:48.130405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:00:41.134 [2024-12-09 10:11:48.130442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:00:41.134 [2024-12-09 10:11:48.130480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:00:41.134 [2024-12-09 10:11:48.130502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:00:41.134 [2024-12-09 10:11:48.130513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:00:41.134 [2024-12-09 10:11:48.130523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.50 MiB 01:00:41.134 [2024-12-09 10:11:48.130548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:00:41.134 [2024-12-09 10:11:48.130561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:00:41.134 [2024-12-09 10:11:48.130572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:00:41.134 [2024-12-09 10:11:48.130594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:00:41.134 [2024-12-09 10:11:48.130605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:00:41.134 [2024-12-09 10:11:48.130627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:00:41.134 [2024-12-09 10:11:48.130648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:00:41.134 [2024-12-09 10:11:48.130659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:00:41.134 [2024-12-09 10:11:48.130680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:00:41.134 [2024-12-09 10:11:48.130692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:00:41.134 [2024-12-09 10:11:48.130713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:00:41.134 [2024-12-09 10:11:48.130723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:00:41.134 [2024-12-09 10:11:48.130745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:00:41.134 [2024-12-09 10:11:48.130756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:00:41.134 [2024-12-09 10:11:48.130778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:00:41.134 [2024-12-09 10:11:48.130789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:00:41.134 [2024-12-09 10:11:48.130800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:00:41.134 [2024-12-09 10:11:48.130814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:00:41.134 [2024-12-09 10:11:48.130826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:00:41.134 [2024-12-09 10:11:48.130837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:00:41.134 [2024-12-09 10:11:48.130859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:00:41.134 [2024-12-09 10:11:48.130870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130880] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:00:41.134 [2024-12-09 10:11:48.130907] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:00:41.134 [2024-12-09 10:11:48.130918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:00:41.134 [2024-12-09 10:11:48.130940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:00:41.134 [2024-12-09 10:11:48.130951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:00:41.134 [2024-12-09 10:11:48.130962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:00:41.134 [2024-12-09 10:11:48.130973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:00:41.134 [2024-12-09 10:11:48.130984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:00:41.134 [2024-12-09 10:11:48.130994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:00:41.134 [2024-12-09 10:11:48.131005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:00:41.134 [2024-12-09 10:11:48.131017] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:00:41.134 [2024-12-09 10:11:48.131031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:00:41.134 [2024-12-09 10:11:48.131049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:00:41.134 [2024-12-09 10:11:48.131061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:00:41.134 [2024-12-09 10:11:48.131073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:00:41.134 [2024-12-09 10:11:48.131085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:00:41.134 [2024-12-09 10:11:48.131096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:00:41.134 [2024-12-09 10:11:48.131124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:00:41.134 [2024-12-09 10:11:48.131135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:00:41.134 [2024-12-09 10:11:48.131147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:00:41.134 [2024-12-09 10:11:48.131173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:00:41.134 [2024-12-09 10:11:48.131184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:00:41.134 [2024-12-09 10:11:48.131195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:00:41.134 [2024-12-09 10:11:48.131206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:00:41.134 [2024-12-09 10:11:48.131234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 
01:00:41.134 [2024-12-09 10:11:48.131247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:00:41.134 [2024-12-09 10:11:48.131259] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:00:41.134 [2024-12-09 10:11:48.131272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:00:41.134 [2024-12-09 10:11:48.131285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:00:41.134 [2024-12-09 10:11:48.131297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:00:41.135 [2024-12-09 10:11:48.131309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:00:41.135 [2024-12-09 10:11:48.131321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:00:41.135 [2024-12-09 10:11:48.131334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.135 [2024-12-09 10:11:48.131359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:00:41.135 [2024-12-09 10:11:48.131373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.093 ms 01:00:41.135 [2024-12-09 10:11:48.131384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.394 [2024-12-09 10:11:48.178824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.394 [2024-12-09 10:11:48.178915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:00:41.394 [2024-12-09 10:11:48.178936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.369 ms 01:00:41.394 [2024-12-09 10:11:48.178954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.394 [2024-12-09 10:11:48.179073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.394 [2024-12-09 10:11:48.179091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:00:41.394 [2024-12-09 10:11:48.179105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:00:41.394 [2024-12-09 10:11:48.179117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.394 [2024-12-09 10:11:48.243201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.394 [2024-12-09 10:11:48.243310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:00:41.394 [2024-12-09 10:11:48.243351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.976 ms 01:00:41.394 [2024-12-09 10:11:48.243364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.394 [2024-12-09 10:11:48.243432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.394 [2024-12-09 10:11:48.243451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:00:41.394 [2024-12-09 10:11:48.243478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:00:41.394 [2024-12-09 10:11:48.243500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.394 [2024-12-09 10:11:48.244196] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 01:00:41.394 [2024-12-09 10:11:48.244224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:00:41.394 [2024-12-09 10:11:48.244238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 01:00:41.394 [2024-12-09 10:11:48.244265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.394 [2024-12-09 10:11:48.244443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.394 [2024-12-09 10:11:48.244464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:00:41.394 [2024-12-09 10:11:48.244485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 01:00:41.394 [2024-12-09 10:11:48.244497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.394 [2024-12-09 10:11:48.267325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.394 [2024-12-09 10:11:48.267403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:00:41.394 [2024-12-09 10:11:48.267422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.798 ms 01:00:41.394 [2024-12-09 10:11:48.267434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.394 [2024-12-09 10:11:48.284830] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:00:41.394 [2024-12-09 10:11:48.284876] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:00:41.394 [2024-12-09 10:11:48.284896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.394 [2024-12-09 10:11:48.284909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:00:41.394 [2024-12-09 10:11:48.284923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.266 ms 01:00:41.394 [2024-12-09 10:11:48.284935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.394 [2024-12-09 10:11:48.318592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.394 [2024-12-09 10:11:48.318673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:00:41.394 [2024-12-09 10:11:48.318692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.574 ms 01:00:41.394 [2024-12-09 10:11:48.318705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.394 [2024-12-09 10:11:48.337537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.394 [2024-12-09 10:11:48.337616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:00:41.394 [2024-12-09 10:11:48.337650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.772 ms 01:00:41.394 [2024-12-09 10:11:48.337662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.394 [2024-12-09 10:11:48.355661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.394 [2024-12-09 10:11:48.355724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:00:41.394 [2024-12-09 10:11:48.355772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.949 ms 01:00:41.394 [2024-12-09 10:11:48.355800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.394 [2024-12-09 10:11:48.356716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.394 [2024-12-09 
10:11:48.356757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:00:41.394 [2024-12-09 10:11:48.356779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 01:00:41.394 [2024-12-09 10:11:48.356792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.654 [2024-12-09 10:11:48.447093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.654 [2024-12-09 10:11:48.447204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:00:41.654 [2024-12-09 10:11:48.447235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.266 ms 01:00:41.654 [2024-12-09 10:11:48.447259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.654 [2024-12-09 10:11:48.461506] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:00:41.654 [2024-12-09 10:11:48.465086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.654 [2024-12-09 10:11:48.465142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:00:41.654 [2024-12-09 10:11:48.465161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.750 ms 01:00:41.654 [2024-12-09 10:11:48.465184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.654 [2024-12-09 10:11:48.465358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.654 [2024-12-09 10:11:48.465381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:00:41.654 [2024-12-09 10:11:48.465404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:00:41.654 [2024-12-09 10:11:48.465415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.654 [2024-12-09 10:11:48.465560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.654 [2024-12-09 10:11:48.465593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:00:41.654 [2024-12-09 10:11:48.465609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 01:00:41.654 [2024-12-09 10:11:48.465630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.654 [2024-12-09 10:11:48.465687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.654 [2024-12-09 10:11:48.465705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:00:41.654 [2024-12-09 10:11:48.465718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:00:41.654 [2024-12-09 10:11:48.465730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.654 [2024-12-09 10:11:48.465781] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:00:41.654 [2024-12-09 10:11:48.465800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.654 [2024-12-09 10:11:48.465813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:00:41.654 [2024-12-09 10:11:48.465825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 01:00:41.654 [2024-12-09 10:11:48.465837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.654 [2024-12-09 10:11:48.501566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.654 [2024-12-09 10:11:48.501647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:00:41.654 [2024-12-09 
10:11:48.501690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.699 ms 01:00:41.654 [2024-12-09 10:11:48.501704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.654
[2024-12-09 10:11:48.501837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:00:41.654 [2024-12-09 10:11:48.501860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:00:41.654 [2024-12-09 10:11:48.501874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 01:00:41.654 [2024-12-09 10:11:48.501886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:00:41.654
[2024-12-09 10:11:48.503333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 413.983 ms, result 0 01:00:42.591
Copying: 1024/1024 [MB] (average 24 MBps)
[2024-12-09 10:12:30.369156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.421 [2024-12-09 10:12:30.369260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*:
[FTL][ftl0] name: Deinit core IO channel 01:01:23.421 [2024-12-09 10:12:30.369304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:01:23.421 [2024-12-09 10:12:30.369318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.421 [2024-12-09 10:12:30.373042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:01:23.421 [2024-12-09 10:12:30.381055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.421 [2024-12-09 10:12:30.381115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:01:23.421 [2024-12-09 10:12:30.381147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.915 ms 01:01:23.421 [2024-12-09 10:12:30.381159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.421 [2024-12-09 10:12:30.393877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.421 [2024-12-09 10:12:30.393950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:01:23.421 [2024-12-09 10:12:30.393970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.541 ms 01:01:23.421 [2024-12-09 10:12:30.393992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.421 [2024-12-09 10:12:30.418208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.421 [2024-12-09 10:12:30.418287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:01:23.421 [2024-12-09 10:12:30.418308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.189 ms 01:01:23.421 [2024-12-09 10:12:30.418321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.421 [2024-12-09 10:12:30.426172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.421 [2024-12-09 10:12:30.426210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:01:23.421 [2024-12-09 10:12:30.426227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.809 ms 01:01:23.421 [2024-12-09 10:12:30.426264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.421 [2024-12-09 10:12:30.464032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.421 [2024-12-09 10:12:30.464093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:01:23.421 [2024-12-09 10:12:30.464120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.695 ms 01:01:23.421 [2024-12-09 10:12:30.464131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.679 [2024-12-09 10:12:30.485589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.679 [2024-12-09 10:12:30.485662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:01:23.679 [2024-12-09 10:12:30.485695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.397 ms 01:01:23.679 [2024-12-09 10:12:30.485707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.679 [2024-12-09 10:12:30.590502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.679 [2024-12-09 10:12:30.590586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:01:23.679 [2024-12-09 10:12:30.590636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.746 ms 01:01:23.679 [2024-12-09 10:12:30.590649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 01:01:23.679 [2024-12-09 10:12:30.629117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.679 [2024-12-09 10:12:30.629162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:01:23.679 [2024-12-09 10:12:30.629195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.446 ms 01:01:23.679 [2024-12-09 10:12:30.629221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.679 [2024-12-09 10:12:30.666330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.679 [2024-12-09 10:12:30.666419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:01:23.679 [2024-12-09 10:12:30.666450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.022 ms 01:01:23.679 [2024-12-09 10:12:30.666461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.679 [2024-12-09 10:12:30.702457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.679 [2024-12-09 10:12:30.702546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:01:23.679 [2024-12-09 10:12:30.702593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.953 ms 01:01:23.679 [2024-12-09 10:12:30.702619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.939 [2024-12-09 10:12:30.739235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.939 [2024-12-09 10:12:30.739332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:01:23.939 [2024-12-09 10:12:30.739364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.475 ms 01:01:23.939 [2024-12-09 10:12:30.739375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.939 [2024-12-09 10:12:30.739418] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:01:23.939 [2024-12-09 10:12:30.739442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 119296 / 261120 wr_cnt: 1 state: open 01:01:23.939 [2024-12-09 10:12:30.739457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 
10:12:30.739603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
01:01:23.939 [2024-12-09 10:12:30.739936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.739994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:01:23.939 [2024-12-09 10:12:30.740291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:01:23.940 [2024-12-09 10:12:30.740730] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:01:23.940 [2024-12-09 10:12:30.740742] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e44c0e4-883f-47a4-b8a4-dbaa5e36b4fb 01:01:23.940 [2024-12-09 10:12:30.740754] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 119296 01:01:23.940 [2024-12-09 10:12:30.740765] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 120256 01:01:23.940 [2024-12-09 10:12:30.740777] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 119296 01:01:23.940 [2024-12-09 10:12:30.740789] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080 01:01:23.940 [2024-12-09 10:12:30.740817] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:01:23.940 [2024-12-09 10:12:30.740834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:01:23.940 [2024-12-09 10:12:30.740850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:01:23.940 [2024-12-09 10:12:30.740861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:01:23.940 [2024-12-09 10:12:30.740872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:01:23.940 [2024-12-09 10:12:30.740884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.940 [2024-12-09 10:12:30.740896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
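A note on the statistics block above: the WAF figure printed by ftl_dev_dump_stats is write amplification, i.e. total media writes divided by user (host) writes — that formula is the standard definition, not something the log spells out. A minimal check, with all values copied verbatim from the dump:

    # Cross-check of the ftl_dev_dump_stats output above.
    total_writes = 120256   # "total writes": blocks written to media (user + metadata)
    user_writes = 119296    # "user writes": blocks written by the host workload
    waf = total_writes / user_writes
    print(f"WAF = {waf:.4f}")                              # -> WAF = 1.0080, matching the log
    print(f"metadata writes = {total_writes - user_writes}")  # -> 960 extra blocks

The "total valid LBAs: 119296" line also matches Band 1's "119296 / 261120" entry in the bands dump, i.e. all valid data currently sits in the one open band.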
01:01:23.940 [2024-12-09 10:12:30.740909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 01:01:23.940 [2024-12-09 10:12:30.740920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.940 [2024-12-09 10:12:30.761638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.940 [2024-12-09 10:12:30.761695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:01:23.940 [2024-12-09 10:12:30.761719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.673 ms 01:01:23.940 [2024-12-09 10:12:30.761731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.940 [2024-12-09 10:12:30.762268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:23.940 [2024-12-09 10:12:30.762300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:01:23.940 [2024-12-09 10:12:30.762315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 01:01:23.940 [2024-12-09 10:12:30.762327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.940 [2024-12-09 10:12:30.817196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:01:23.940 [2024-12-09 10:12:30.817314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:01:23.940 [2024-12-09 10:12:30.817350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:01:23.940 [2024-12-09 10:12:30.817362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.940 [2024-12-09 10:12:30.817463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:01:23.940 [2024-12-09 10:12:30.817480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:01:23.940 [2024-12-09 10:12:30.817492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:01:23.940 [2024-12-09 10:12:30.817518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.940 [2024-12-09 10:12:30.817651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:01:23.940 [2024-12-09 10:12:30.817677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:01:23.940 [2024-12-09 10:12:30.817690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:01:23.940 [2024-12-09 10:12:30.817701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.940 [2024-12-09 10:12:30.817726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:01:23.940 [2024-12-09 10:12:30.817741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:01:23.940 [2024-12-09 10:12:30.817756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:01:23.940 [2024-12-09 10:12:30.817767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:23.940 [2024-12-09 10:12:30.946336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:01:23.940 [2024-12-09 10:12:30.946463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:01:23.940 [2024-12-09 10:12:30.946497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:01:23.940 [2024-12-09 10:12:30.946509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:24.199 [2024-12-09 10:12:31.053388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:01:24.199 [2024-12-09 10:12:31.053545] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:01:24.199 [2024-12-09 10:12:31.053566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:01:24.199 [2024-12-09 10:12:31.053579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:24.199 [2024-12-09 10:12:31.053707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:01:24.199 [2024-12-09 10:12:31.053727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:01:24.199 [2024-12-09 10:12:31.053756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:01:24.199 [2024-12-09 10:12:31.053773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:24.199 [2024-12-09 10:12:31.053858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:01:24.199 [2024-12-09 10:12:31.053876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:01:24.199 [2024-12-09 10:12:31.053888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:01:24.199 [2024-12-09 10:12:31.053921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:24.199 [2024-12-09 10:12:31.054055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:01:24.199 [2024-12-09 10:12:31.054086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:01:24.199 [2024-12-09 10:12:31.054102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:01:24.199 [2024-12-09 10:12:31.054120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:24.199 [2024-12-09 10:12:31.054173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:01:24.199 [2024-12-09 10:12:31.054193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:01:24.199 [2024-12-09 10:12:31.054206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:01:24.199 [2024-12-09 10:12:31.054217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:24.199 [2024-12-09 10:12:31.054280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:01:24.199 [2024-12-09 10:12:31.054299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:01:24.199 [2024-12-09 10:12:31.054312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:01:24.199 [2024-12-09 10:12:31.054328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:24.199 [2024-12-09 10:12:31.054391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:01:24.199 [2024-12-09 10:12:31.054408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:01:24.199 [2024-12-09 10:12:31.054421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:01:24.199 [2024-12-09 10:12:31.054433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:24.199 [2024-12-09 10:12:31.054584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 687.603 ms, result 0 01:01:26.101 01:01:26.101 01:01:26.101 10:12:32 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 01:01:26.101 [2024-12-09 10:12:32.893811] Starting SPDK v25.01-pre 
git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:01:26.101 [2024-12-09 10:12:32.894003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81076 ] 01:01:26.101 [2024-12-09 10:12:33.082587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:26.360 [2024-12-09 10:12:33.235026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:26.945 [2024-12-09 10:12:33.682051] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:01:26.945 [2024-12-09 10:12:33.682140] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:01:26.945 [2024-12-09 10:12:33.856328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.945 [2024-12-09 10:12:33.856459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:01:26.945 [2024-12-09 10:12:33.856527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:01:26.945 [2024-12-09 10:12:33.856539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.945 [2024-12-09 10:12:33.856649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.945 [2024-12-09 10:12:33.856671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:01:26.945 [2024-12-09 10:12:33.856700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 01:01:26.945 [2024-12-09 10:12:33.856710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.945 [2024-12-09 10:12:33.856743] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:01:26.945 [2024-12-09 10:12:33.857657] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:01:26.945 [2024-12-09 10:12:33.857699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.945 [2024-12-09 10:12:33.857713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:01:26.945 [2024-12-09 10:12:33.857741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 01:01:26.945 [2024-12-09 10:12:33.857772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.945 [2024-12-09 10:12:33.860026] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:01:26.945 [2024-12-09 10:12:33.881042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.945 [2024-12-09 10:12:33.881149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:01:26.945 [2024-12-09 10:12:33.881168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.017 ms 01:01:26.945 [2024-12-09 10:12:33.881180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.945 [2024-12-09 10:12:33.881301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.945 [2024-12-09 10:12:33.881352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:01:26.945 [2024-12-09 10:12:33.881365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 01:01:26.945 [2024-12-09 10:12:33.881377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.945 [2024-12-09 10:12:33.892790] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.945 [2024-12-09 10:12:33.892836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:01:26.945 [2024-12-09 10:12:33.892852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.269 ms 01:01:26.945 [2024-12-09 10:12:33.892870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.946 [2024-12-09 10:12:33.893063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.946 [2024-12-09 10:12:33.893083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:01:26.946 [2024-12-09 10:12:33.893096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 01:01:26.946 [2024-12-09 10:12:33.893124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.946 [2024-12-09 10:12:33.893207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.946 [2024-12-09 10:12:33.893230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:01:26.946 [2024-12-09 10:12:33.893244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 01:01:26.946 [2024-12-09 10:12:33.893256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.946 [2024-12-09 10:12:33.893300] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:01:26.946 [2024-12-09 10:12:33.898958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.946 [2024-12-09 10:12:33.899014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:01:26.946 [2024-12-09 10:12:33.899036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.667 ms 01:01:26.946 [2024-12-09 10:12:33.899047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.946 [2024-12-09 10:12:33.899092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.946 [2024-12-09 10:12:33.899110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:01:26.946 [2024-12-09 10:12:33.899122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 01:01:26.946 [2024-12-09 10:12:33.899134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.946 [2024-12-09 10:12:33.899183] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:01:26.946 [2024-12-09 10:12:33.899220] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:01:26.946 [2024-12-09 10:12:33.899277] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:01:26.946 [2024-12-09 10:12:33.899308] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:01:26.946 [2024-12-09 10:12:33.899418] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:01:26.946 [2024-12-09 10:12:33.899434] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:01:26.946 [2024-12-09 10:12:33.899449] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:01:26.946 [2024-12-09 10:12:33.899465] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 
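A quick sizing note on the spdk_dd restore command launched above (--ib=ftl0 --of=.../testfile --skip=131072 --count=262144): as with dd, --skip and --count are in input I/O units. Assuming the FTL bdev's 4 KiB logical block size — an inference consistent with the "1024/1024 [MB]" progress totals, not a value the tool prints — the command reads 1 GiB starting 512 MiB into ftl0:

    # Sizing check for the spdk_dd invocation above; BLOCK is an assumption.
    BLOCK = 4096                # assumed FTL bdev logical block size, bytes
    skip_blocks = 131072        # --skip: input blocks skipped before copying
    count_blocks = 262144       # --count: input blocks copied
    print(skip_blocks * BLOCK // 2**20, "MiB offset")   # -> 512 MiB
    print(count_blocks * BLOCK // 2**20, "MiB copied")  # -> 1024 MiB

The 103424.00 MiB base device capacity reported just above comfortably covers that range.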
01:01:26.946 [2024-12-09 10:12:33.899479] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:01:26.946 [2024-12-09 10:12:33.899491] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:01:26.946 [2024-12-09 10:12:33.899504] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:01:26.946 [2024-12-09 10:12:33.899529] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:01:26.946 [2024-12-09 10:12:33.899546] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:01:26.946 [2024-12-09 10:12:33.899559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.946 [2024-12-09 10:12:33.899570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:01:26.946 [2024-12-09 10:12:33.899583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 01:01:26.946 [2024-12-09 10:12:33.899594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.946 [2024-12-09 10:12:33.899695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.946 [2024-12-09 10:12:33.899714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:01:26.946 [2024-12-09 10:12:33.899727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:01:26.946 [2024-12-09 10:12:33.899738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.946 [2024-12-09 10:12:33.899864] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:01:26.946 [2024-12-09 10:12:33.899887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:01:26.946 [2024-12-09 10:12:33.899906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:01:26.946 [2024-12-09 10:12:33.899925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:01:26.946 [2024-12-09 10:12:33.899937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:01:26.946 [2024-12-09 10:12:33.899947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:01:26.946 [2024-12-09 10:12:33.899958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:01:26.946 [2024-12-09 10:12:33.899969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:01:26.946 [2024-12-09 10:12:33.899979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:01:26.946 [2024-12-09 10:12:33.899992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:01:26.946 [2024-12-09 10:12:33.900003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:01:26.946 [2024-12-09 10:12:33.900013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:01:26.946 [2024-12-09 10:12:33.900023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:01:26.946 [2024-12-09 10:12:33.900047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:01:26.946 [2024-12-09 10:12:33.900060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:01:26.946 [2024-12-09 10:12:33.900070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:01:26.946 [2024-12-09 10:12:33.900080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:01:26.946 [2024-12-09 10:12:33.900091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:01:26.946 
[2024-12-09 10:12:33.900101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:01:26.946 [2024-12-09 10:12:33.900112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:01:26.946 [2024-12-09 10:12:33.900122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:01:26.946 [2024-12-09 10:12:33.900133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:01:26.946 [2024-12-09 10:12:33.900143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:01:26.946 [2024-12-09 10:12:33.900153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:01:26.946 [2024-12-09 10:12:33.900164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:01:26.946 [2024-12-09 10:12:33.900174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:01:26.946 [2024-12-09 10:12:33.900184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:01:26.946 [2024-12-09 10:12:33.900195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:01:26.946 [2024-12-09 10:12:33.900205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:01:26.946 [2024-12-09 10:12:33.900216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:01:26.946 [2024-12-09 10:12:33.900226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:01:26.946 [2024-12-09 10:12:33.900236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:01:26.946 [2024-12-09 10:12:33.900262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:01:26.946 [2024-12-09 10:12:33.900276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:01:26.946 [2024-12-09 10:12:33.900287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:01:26.946 [2024-12-09 10:12:33.900297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:01:26.946 [2024-12-09 10:12:33.900308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:01:26.946 [2024-12-09 10:12:33.900319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:01:26.946 [2024-12-09 10:12:33.900329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:01:26.946 [2024-12-09 10:12:33.900339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:01:26.946 [2024-12-09 10:12:33.900350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:01:26.946 [2024-12-09 10:12:33.900368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:01:26.946 [2024-12-09 10:12:33.900379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:01:26.946 [2024-12-09 10:12:33.900390] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:01:26.946 [2024-12-09 10:12:33.900402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:01:26.946 [2024-12-09 10:12:33.900413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:01:26.946 [2024-12-09 10:12:33.900425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:01:26.946 [2024-12-09 10:12:33.900436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:01:26.946 [2024-12-09 10:12:33.900447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:01:26.946 [2024-12-09 10:12:33.900457] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 3.38 MiB 01:01:26.946 [2024-12-09 10:12:33.900468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:01:26.946 [2024-12-09 10:12:33.900478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:01:26.946 [2024-12-09 10:12:33.900490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:01:26.946 [2024-12-09 10:12:33.900503] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:01:26.946 [2024-12-09 10:12:33.900517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:01:26.946 [2024-12-09 10:12:33.900536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:01:26.946 [2024-12-09 10:12:33.900547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:01:26.946 [2024-12-09 10:12:33.900558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:01:26.946 [2024-12-09 10:12:33.900569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:01:26.946 [2024-12-09 10:12:33.900580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:01:26.946 [2024-12-09 10:12:33.900591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:01:26.946 [2024-12-09 10:12:33.900602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:01:26.946 [2024-12-09 10:12:33.900613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:01:26.946 [2024-12-09 10:12:33.900624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:01:26.946 [2024-12-09 10:12:33.900636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:01:26.946 [2024-12-09 10:12:33.900647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:01:26.947 [2024-12-09 10:12:33.900658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:01:26.947 [2024-12-09 10:12:33.900669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:01:26.947 [2024-12-09 10:12:33.900680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:01:26.947 [2024-12-09 10:12:33.900691] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:01:26.947 [2024-12-09 10:12:33.900704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:01:26.947 [2024-12-09 10:12:33.900716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:01:26.947 [2024-12-09 10:12:33.900727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:01:26.947 [2024-12-09 10:12:33.900757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:01:26.947 [2024-12-09 10:12:33.900770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:01:26.947 [2024-12-09 10:12:33.900782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.947 [2024-12-09 10:12:33.900798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:01:26.947 [2024-12-09 10:12:33.900810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 01:01:26.947 [2024-12-09 10:12:33.900836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.947 [2024-12-09 10:12:33.945919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.947 [2024-12-09 10:12:33.945979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:01:26.947 [2024-12-09 10:12:33.946000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.986 ms 01:01:26.947 [2024-12-09 10:12:33.946018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:26.947 [2024-12-09 10:12:33.946143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:26.947 [2024-12-09 10:12:33.946160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:01:26.947 [2024-12-09 10:12:33.946174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:01:26.947 [2024-12-09 10:12:33.946185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.225 [2024-12-09 10:12:34.009710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.225 [2024-12-09 10:12:34.009789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:01:27.225 [2024-12-09 10:12:34.009823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.399 ms 01:01:27.225 [2024-12-09 10:12:34.009851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.225 [2024-12-09 10:12:34.009966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.225 [2024-12-09 10:12:34.009986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:01:27.225 [2024-12-09 10:12:34.010007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:01:27.225 [2024-12-09 10:12:34.010019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.225 [2024-12-09 10:12:34.010724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.225 [2024-12-09 10:12:34.010758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:01:27.225 [2024-12-09 10:12:34.010774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 01:01:27.225 [2024-12-09 10:12:34.010785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.225 [2024-12-09 10:12:34.010960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.225 [2024-12-09 10:12:34.010980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
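The two layout dumps above describe the same regions twice: dump_region reports offsets and sizes in MiB, while ftl_superblock_v5_md_layout_dump reports hex block offsets and counts. Converting the hex entries with the same assumed 4 KiB block size shows the figures agree — matching here by size, since the numeric region-type codes are not named in the log:

    # Relating the hex superblock layout entries to the MiB dump above.
    BLOCK, MiB = 4096, 2**20                 # 4 KiB blocks: assumption, as before
    print(0x5000 * BLOCK / MiB)              # 80.0   -> "Region l2p ... blocks: 80.00 MiB"
    print(0x360 * BLOCK / MiB)               # 3.375  -> "Region vmap ... blocks: 3.38 MiB"
    print(0x1900000 * BLOCK / MiB)           # 102400.0 -> "Region data_btm ... 102400.00 MiB"
    # Independent check on the l2p region's payload:
    print(20971520 * 4 / MiB)                # 80.0 MiB = "L2P entries" x "L2P address size: 4"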
01:01:27.225 [2024-12-09 10:12:34.011001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 01:01:27.225 [2024-12-09 10:12:34.011013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.225 [2024-12-09 10:12:34.035640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.225 [2024-12-09 10:12:34.035757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:01:27.225 [2024-12-09 10:12:34.035791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.589 ms 01:01:27.225 [2024-12-09 10:12:34.035804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.225 [2024-12-09 10:12:34.054335] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 01:01:27.225 [2024-12-09 10:12:34.054415] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:01:27.225 [2024-12-09 10:12:34.054452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.225 [2024-12-09 10:12:34.054475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:01:27.226 [2024-12-09 10:12:34.054493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.483 ms 01:01:27.226 [2024-12-09 10:12:34.054520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.226 [2024-12-09 10:12:34.089593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.226 [2024-12-09 10:12:34.089688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:01:27.226 [2024-12-09 10:12:34.089706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.007 ms 01:01:27.226 [2024-12-09 10:12:34.089718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.226 [2024-12-09 10:12:34.108159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.226 [2024-12-09 10:12:34.108227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:01:27.226 [2024-12-09 10:12:34.108261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.382 ms 01:01:27.226 [2024-12-09 10:12:34.108284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.226 [2024-12-09 10:12:34.125873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.226 [2024-12-09 10:12:34.125928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:01:27.226 [2024-12-09 10:12:34.125946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.542 ms 01:01:27.226 [2024-12-09 10:12:34.125957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.226 [2024-12-09 10:12:34.126893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.226 [2024-12-09 10:12:34.126944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:01:27.226 [2024-12-09 10:12:34.126965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 01:01:27.226 [2024-12-09 10:12:34.126977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.226 [2024-12-09 10:12:34.216917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.226 [2024-12-09 10:12:34.216990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:01:27.226 [2024-12-09 10:12:34.217018] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.911 ms 01:01:27.226 [2024-12-09 10:12:34.217031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.226 [2024-12-09 10:12:34.230470] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:01:27.226 [2024-12-09 10:12:34.234506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.226 [2024-12-09 10:12:34.234547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:01:27.226 [2024-12-09 10:12:34.234566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.399 ms 01:01:27.226 [2024-12-09 10:12:34.234579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.226 [2024-12-09 10:12:34.234729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.226 [2024-12-09 10:12:34.234750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:01:27.226 [2024-12-09 10:12:34.234769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:01:27.226 [2024-12-09 10:12:34.234780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.226 [2024-12-09 10:12:34.236885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.226 [2024-12-09 10:12:34.236955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:01:27.226 [2024-12-09 10:12:34.236971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.038 ms 01:01:27.226 [2024-12-09 10:12:34.236981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.226 [2024-12-09 10:12:34.237024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.226 [2024-12-09 10:12:34.237041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:01:27.226 [2024-12-09 10:12:34.237054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:01:27.226 [2024-12-09 10:12:34.237065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.226 [2024-12-09 10:12:34.237117] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:01:27.226 [2024-12-09 10:12:34.237136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.226 [2024-12-09 10:12:34.237148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:01:27.226 [2024-12-09 10:12:34.237160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 01:01:27.226 [2024-12-09 10:12:34.237172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.484 [2024-12-09 10:12:34.272868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.484 [2024-12-09 10:12:34.272933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:01:27.484 [2024-12-09 10:12:34.272975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.666 ms 01:01:27.484 [2024-12-09 10:12:34.272988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:01:27.484 [2024-12-09 10:12:34.273080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:01:27.484 [2024-12-09 10:12:34.273100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:01:27.484 [2024-12-09 10:12:34.273113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 01:01:27.484 [2024-12-09 10:12:34.273124] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 01:01:27.484 [2024-12-09 10:12:34.277049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 419.159 ms, result 0 01:01:28.860  [2024-12-09T10:12:36.841Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-09T10:12:37.776Z] Copying: 46/1024 [MB] (25 MBps) [2024-12-09T10:12:38.710Z] Copying: 71/1024 [MB] (25 MBps) [2024-12-09T10:12:39.694Z] Copying: 96/1024 [MB] (24 MBps) [2024-12-09T10:12:40.630Z] Copying: 120/1024 [MB] (24 MBps) [2024-12-09T10:12:41.565Z] Copying: 146/1024 [MB] (25 MBps) [2024-12-09T10:12:42.942Z] Copying: 172/1024 [MB] (25 MBps) [2024-12-09T10:12:43.878Z] Copying: 197/1024 [MB] (25 MBps) [2024-12-09T10:12:44.815Z] Copying: 222/1024 [MB] (25 MBps) [2024-12-09T10:12:45.748Z] Copying: 246/1024 [MB] (24 MBps) [2024-12-09T10:12:46.684Z] Copying: 272/1024 [MB] (25 MBps) [2024-12-09T10:12:47.619Z] Copying: 298/1024 [MB] (25 MBps) [2024-12-09T10:12:48.554Z] Copying: 322/1024 [MB] (24 MBps) [2024-12-09T10:12:49.929Z] Copying: 348/1024 [MB] (25 MBps) [2024-12-09T10:12:50.865Z] Copying: 374/1024 [MB] (25 MBps) [2024-12-09T10:12:51.801Z] Copying: 400/1024 [MB] (25 MBps) [2024-12-09T10:12:52.735Z] Copying: 424/1024 [MB] (24 MBps) [2024-12-09T10:12:53.670Z] Copying: 449/1024 [MB] (24 MBps) [2024-12-09T10:12:54.606Z] Copying: 475/1024 [MB] (26 MBps) [2024-12-09T10:12:55.541Z] Copying: 501/1024 [MB] (25 MBps) [2024-12-09T10:12:56.916Z] Copying: 525/1024 [MB] (24 MBps) [2024-12-09T10:12:57.852Z] Copying: 550/1024 [MB] (25 MBps) [2024-12-09T10:12:58.788Z] Copying: 572/1024 [MB] (22 MBps) [2024-12-09T10:12:59.723Z] Copying: 595/1024 [MB] (22 MBps) [2024-12-09T10:13:00.659Z] Copying: 617/1024 [MB] (22 MBps) [2024-12-09T10:13:01.596Z] Copying: 639/1024 [MB] (22 MBps) [2024-12-09T10:13:02.974Z] Copying: 662/1024 [MB] (22 MBps) [2024-12-09T10:13:03.541Z] Copying: 685/1024 [MB] (22 MBps) [2024-12-09T10:13:04.916Z] Copying: 708/1024 [MB] (22 MBps) [2024-12-09T10:13:05.851Z] Copying: 731/1024 [MB] (23 MBps) [2024-12-09T10:13:06.786Z] Copying: 754/1024 [MB] (23 MBps) [2024-12-09T10:13:07.720Z] Copying: 778/1024 [MB] (23 MBps) [2024-12-09T10:13:08.656Z] Copying: 802/1024 [MB] (24 MBps) [2024-12-09T10:13:09.596Z] Copying: 828/1024 [MB] (25 MBps) [2024-12-09T10:13:10.993Z] Copying: 854/1024 [MB] (25 MBps) [2024-12-09T10:13:11.577Z] Copying: 876/1024 [MB] (22 MBps) [2024-12-09T10:13:12.964Z] Copying: 899/1024 [MB] (22 MBps) [2024-12-09T10:13:13.900Z] Copying: 922/1024 [MB] (22 MBps) [2024-12-09T10:13:14.852Z] Copying: 944/1024 [MB] (22 MBps) [2024-12-09T10:13:15.788Z] Copying: 968/1024 [MB] (23 MBps) [2024-12-09T10:13:16.724Z] Copying: 991/1024 [MB] (23 MBps) [2024-12-09T10:13:16.996Z] Copying: 1015/1024 [MB] (23 MBps) [2024-12-09T10:13:17.561Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-09 10:13:17.330890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.517 [2024-12-09 10:13:17.331228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:02:10.517 [2024-12-09 10:13:17.331382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:02:10.517 [2024-12-09 10:13:17.331504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.517 [2024-12-09 10:13:17.331581] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:02:10.517 [2024-12-09 10:13:17.336633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.517 [2024-12-09 10:13:17.336798] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:02:10.517 [2024-12-09 10:13:17.336936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.869 ms 01:02:10.517 [2024-12-09 10:13:17.336963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.517 [2024-12-09 10:13:17.337415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.517 [2024-12-09 10:13:17.337579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:02:10.517 [2024-12-09 10:13:17.337714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 01:02:10.517 [2024-12-09 10:13:17.337750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.517 [2024-12-09 10:13:17.343149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.518 [2024-12-09 10:13:17.343192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:02:10.518 [2024-12-09 10:13:17.343209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.364 ms 01:02:10.518 [2024-12-09 10:13:17.343222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.518 [2024-12-09 10:13:17.350489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.518 [2024-12-09 10:13:17.350524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:02:10.518 [2024-12-09 10:13:17.350539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.212 ms 01:02:10.518 [2024-12-09 10:13:17.350559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.518 [2024-12-09 10:13:17.382825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.518 [2024-12-09 10:13:17.382886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:02:10.518 [2024-12-09 10:13:17.382919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.200 ms 01:02:10.518 [2024-12-09 10:13:17.382931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.518 [2024-12-09 10:13:17.400220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.518 [2024-12-09 10:13:17.400283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:02:10.518 [2024-12-09 10:13:17.400301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.244 ms 01:02:10.518 [2024-12-09 10:13:17.400313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.518 [2024-12-09 10:13:17.513995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.518 [2024-12-09 10:13:17.514061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:02:10.518 [2024-12-09 10:13:17.514082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.628 ms 01:02:10.518 [2024-12-09 10:13:17.514095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.518 [2024-12-09 10:13:17.546096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.518 [2024-12-09 10:13:17.546150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:02:10.518 [2024-12-09 10:13:17.546169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.975 ms 01:02:10.518 [2024-12-09 10:13:17.546181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.777 [2024-12-09 10:13:17.576370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[2024-12-09 10:13:17.576370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.777 [2024-12-09 10:13:17.576431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:02:10.777 [2024-12-09 10:13:17.576448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.143 ms 01:02:10.777 [2024-12-09 10:13:17.576459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.777 [2024-12-09 10:13:17.606384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.777 [2024-12-09 10:13:17.606438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:02:10.777 [2024-12-09 10:13:17.606456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.876 ms 01:02:10.777 [2024-12-09 10:13:17.606469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.777 [2024-12-09 10:13:17.636874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.777 [2024-12-09 10:13:17.636940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:02:10.777 [2024-12-09 10:13:17.636960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.288 ms 01:02:10.777 [2024-12-09 10:13:17.636971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.777 [2024-12-09 10:13:17.637048] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:02:10.777 [2024-12-09 10:13:17.637074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 01:02:10.777 [2024-12-09 10:13:17.637089 - 10:13:17.638271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free [99 identical per-band entries collapsed] 01:02:10.778 [2024-12-09 10:13:17.638292] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:02:10.778 [2024-12-09 10:13:17.638304] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e44c0e4-883f-47a4-b8a4-dbaa5e36b4fb 01:02:10.778 [2024-12-09 10:13:17.638317] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 01:02:10.778 [2024-12-09 10:13:17.638328] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 12736 01:02:10.778 [2024-12-09 10:13:17.638339] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 11776 01:02:10.778 [2024-12-09 10:13:17.638351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0815 01:02:10.778 [2024-12-09 10:13:17.638373] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:02:10.778 [2024-12-09 10:13:17.638399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:02:10.779 [2024-12-09 10:13:17.638410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:02:10.779 [2024-12-09 10:13:17.638420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:02:10.779 [2024-12-09 10:13:17.638430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:02:10.779 [2024-12-09 10:13:17.638441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.779 [2024-12-09 10:13:17.638466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:02:10.779 [2024-12-09 10:13:17.638478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 01:02:10.779 [2024-12-09 10:13:17.638490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.779 [2024-12-09 10:13:17.655919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.779 [2024-12-09 10:13:17.655977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:02:10.779 [2024-12-09 10:13:17.656007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.341 ms 01:02:10.779
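The write-amplification figure in the statistics dump above is simply the ratio of the two write counters printed beside it. As a quick check:

    WAF = total writes / user writes = 12736 / 11776 ≈ 1.0815

so the 960 extra blocks written during this run are FTL-internal traffic (metadata persists and relocation), roughly 8% on top of the user writes.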
[2024-12-09 10:13:17.656020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.779 [2024-12-09 10:13:17.656563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:10.779 [2024-12-09 10:13:17.656592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:02:10.779 [2024-12-09 10:13:17.656607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 01:02:10.779 [2024-12-09 10:13:17.656619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.779 [2024-12-09 10:13:17.700911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:02:10.779 [2024-12-09 10:13:17.700985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:02:10.779 [2024-12-09 10:13:17.701004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:02:10.779 [2024-12-09 10:13:17.701016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.779 [2024-12-09 10:13:17.701108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:02:10.779 [2024-12-09 10:13:17.701125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:02:10.779 [2024-12-09 10:13:17.701137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:02:10.779 [2024-12-09 10:13:17.701148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.779 [2024-12-09 10:13:17.701263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:02:10.779 [2024-12-09 10:13:17.701285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:02:10.779 [2024-12-09 10:13:17.701306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:02:10.779 [2024-12-09 10:13:17.701318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.779 [2024-12-09 10:13:17.701343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:02:10.779 [2024-12-09 10:13:17.701358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:02:10.779 [2024-12-09 10:13:17.701371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:02:10.779 [2024-12-09 10:13:17.701382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:10.779 [2024-12-09 10:13:17.811242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:02:10.779 [2024-12-09 10:13:17.811321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:02:10.779 [2024-12-09 10:13:17.811341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:02:10.779 [2024-12-09 10:13:17.811353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:11.037 [2024-12-09 10:13:17.898303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:02:11.037 [2024-12-09 10:13:17.898384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:02:11.037 [2024-12-09 10:13:17.898404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:02:11.037 [2024-12-09 10:13:17.898434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:11.037 [2024-12-09 10:13:17.898552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:02:11.037 [2024-12-09 10:13:17.898571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:02:11.037 [2024-12-09 10:13:17.898584] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:02:11.037 [2024-12-09 10:13:17.898601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:11.037 [2024-12-09 10:13:17.898654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:02:11.037 [2024-12-09 10:13:17.898671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:02:11.037 [2024-12-09 10:13:17.898683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:02:11.037 [2024-12-09 10:13:17.898695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:11.037 [2024-12-09 10:13:17.898828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:02:11.037 [2024-12-09 10:13:17.898862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:02:11.037 [2024-12-09 10:13:17.898876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:02:11.037 [2024-12-09 10:13:17.898887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:11.037 [2024-12-09 10:13:17.898959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:02:11.037 [2024-12-09 10:13:17.898978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:02:11.037 [2024-12-09 10:13:17.898990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:02:11.037 [2024-12-09 10:13:17.899002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:11.037 [2024-12-09 10:13:17.899071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:02:11.037 [2024-12-09 10:13:17.899092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:02:11.037 [2024-12-09 10:13:17.899105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:02:11.037 [2024-12-09 10:13:17.899116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:11.037 [2024-12-09 10:13:17.899181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:02:11.037 [2024-12-09 10:13:17.899204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:02:11.037 [2024-12-09 10:13:17.899217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:02:11.037 [2024-12-09 10:13:17.899229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:11.037 [2024-12-09 10:13:17.899421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 568.470 ms, result 0 01:02:11.971 01:02:11.971 01:02:11.971 10:13:18 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:02:14.501 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79453 01:02:14.501 10:13:21 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79453 ']' 01:02:14.501 
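The md5sum -c line above is the whole point of the ftl_restore test: data written before the controlled shutdown must read back bit-identical once the device is brought up again. A minimal sketch of that pattern, assuming the same testfile path this log uses (the surrounding write and restart steps happen earlier in the test and are paraphrased here):

    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    md5sum "$testfile" > "$testfile.md5"   # record the checksum before shutdown
    # ... FTL bdev is torn down and recreated on the same base/cache devices ...
    md5sum -c "$testfile.md5"              # prints 'testfile: OK' only on a faithful restore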
10:13:21 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79453 01:02:14.501 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79453) - No such process 01:02:14.501 Process with pid 79453 is not found 01:02:14.501 Remove shared memory files 01:02:14.501 10:13:21 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79453 is not found' 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 01:02:14.501 10:13:21 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 01:02:14.501 01:02:14.501 real 3m29.835s 01:02:14.501 user 3m13.342s 01:02:14.501 sys 0m18.706s 01:02:14.501 10:13:21 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 01:02:14.501 10:13:21 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 01:02:14.501 ************************************ 01:02:14.501 END TEST ftl_restore 01:02:14.501 ************************************ 01:02:14.501 10:13:21 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 01:02:14.501 10:13:21 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:02:14.501 10:13:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 01:02:14.501 10:13:21 ftl -- common/autotest_common.sh@10 -- # set +x 01:02:14.501 ************************************ 01:02:14.501 START TEST ftl_dirty_shutdown 01:02:14.501 ************************************ 01:02:14.501 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 01:02:14.501 * Looking for test storage... 
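The xtrace block below is scripts/common.sh deciding whether the installed lcov is at least version 2 (cmp_versions 1.15 '<' 2 returns 0, so the older option set is selected). A condensed reconstruction of that comparison loop, assuming the helper splits on '.', '-' and ':' exactly as the trace suggests (this is a sketch, not a verbatim copy of common.sh):

    cmp_versions() {                        # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2 v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # compare numerically field by field; missing fields count as 0
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '>=' || $op == '<=' ]]    # all fields equal
    }
    cmp_versions 1.15 '<' 2 && echo 'lcov predates 2.x'   # matches the trace: 1 < 2 on the first field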
01:02:14.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:02:14.501 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:02:14.501 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 01:02:14.501 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:02:14.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:14.761 --rc genhtml_branch_coverage=1 01:02:14.761 --rc genhtml_function_coverage=1 01:02:14.761 --rc genhtml_legend=1 01:02:14.761 --rc geninfo_all_blocks=1 01:02:14.761 --rc geninfo_unexecuted_blocks=1 01:02:14.761 01:02:14.761 ' 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:02:14.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:14.761 --rc genhtml_branch_coverage=1 01:02:14.761 --rc genhtml_function_coverage=1 01:02:14.761 --rc genhtml_legend=1 01:02:14.761 --rc geninfo_all_blocks=1 01:02:14.761 --rc geninfo_unexecuted_blocks=1 01:02:14.761 01:02:14.761 ' 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:02:14.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:14.761 --rc genhtml_branch_coverage=1 01:02:14.761 --rc genhtml_function_coverage=1 01:02:14.761 --rc genhtml_legend=1 01:02:14.761 --rc geninfo_all_blocks=1 01:02:14.761 --rc geninfo_unexecuted_blocks=1 01:02:14.761 01:02:14.761 ' 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:02:14.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:14.761 --rc genhtml_branch_coverage=1 01:02:14.761 --rc genhtml_function_coverage=1 01:02:14.761 --rc genhtml_legend=1 01:02:14.761 --rc geninfo_all_blocks=1 01:02:14.761 --rc geninfo_unexecuted_blocks=1 01:02:14.761 01:02:14.761 ' 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:02:14.761 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 01:02:14.762 10:13:21 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81624 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81624 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81624 ']' 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:02:14.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:14.762 10:13:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 01:02:14.762 [2024-12-09 10:13:21.801291] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
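At this point dirty_shutdown.sh has parsed its options (-c 0000:00:10.0 for the NV cache, 0000:00:11.0 as the base device) and launched a fresh SPDK target. The launch pattern, condensed from the trace (the trap line and waitforlisten call appear verbatim above):

    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT            # dirty_shutdown.sh@42
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # one reactor, core 0
    svcpid=$!                                                  # pid 81624 in this run
    waitforlisten "$svcpid"    # common helper: poll until /var/tmp/spdk.sock accepts RPCs

The trace that follows then builds the bdev stack under test: attach nvme0 (0000:00:11.0), carve a thin-provisioned lvol out of an lvstore on it as the FTL base, then attach nvc0 (0000:00:10.0) and split off nvc0n1p0 as the write-buffer cache.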
01:02:14.762 [2024-12-09 10:13:21.801468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81624 ] 01:02:15.021 [2024-12-09 10:13:21.988152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:15.280 [2024-12-09 10:13:22.109245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:02:15.847 10:13:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:15.847 10:13:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 01:02:15.847 10:13:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 01:02:15.847 10:13:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 01:02:15.847 10:13:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:02:15.847 10:13:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 01:02:15.847 10:13:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 01:02:15.847 10:13:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 01:02:16.414 10:13:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 01:02:16.414 10:13:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 01:02:16.414 10:13:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 01:02:16.414 10:13:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 01:02:16.414 10:13:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:02:16.414 10:13:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:02:16.414 10:13:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:02:16.414 10:13:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 01:02:16.673 10:13:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:02:16.673 { 01:02:16.673 "name": "nvme0n1", 01:02:16.673 "aliases": [ 01:02:16.673 "c238d74e-bbb5-4a22-a126-a6c162b5ddd2" 01:02:16.673 ], 01:02:16.673 "product_name": "NVMe disk", 01:02:16.673 "block_size": 4096, 01:02:16.673 "num_blocks": 1310720, 01:02:16.673 "uuid": "c238d74e-bbb5-4a22-a126-a6c162b5ddd2", 01:02:16.673 "numa_id": -1, 01:02:16.673 "assigned_rate_limits": { 01:02:16.673 "rw_ios_per_sec": 0, 01:02:16.673 "rw_mbytes_per_sec": 0, 01:02:16.673 "r_mbytes_per_sec": 0, 01:02:16.673 "w_mbytes_per_sec": 0 01:02:16.673 }, 01:02:16.673 "claimed": true, 01:02:16.673 "claim_type": "read_many_write_one", 01:02:16.673 "zoned": false, 01:02:16.673 "supported_io_types": { 01:02:16.673 "read": true, 01:02:16.673 "write": true, 01:02:16.673 "unmap": true, 01:02:16.673 "flush": true, 01:02:16.673 "reset": true, 01:02:16.673 "nvme_admin": true, 01:02:16.673 "nvme_io": true, 01:02:16.673 "nvme_io_md": false, 01:02:16.673 "write_zeroes": true, 01:02:16.673 "zcopy": false, 01:02:16.673 "get_zone_info": false, 01:02:16.673 "zone_management": false, 01:02:16.673 "zone_append": false, 01:02:16.673 "compare": true, 01:02:16.673 "compare_and_write": false, 01:02:16.673 "abort": true, 01:02:16.673 "seek_hole": false, 01:02:16.673 "seek_data": false, 01:02:16.673 
"copy": true, 01:02:16.673 "nvme_iov_md": false 01:02:16.673 }, 01:02:16.673 "driver_specific": { 01:02:16.673 "nvme": [ 01:02:16.673 { 01:02:16.673 "pci_address": "0000:00:11.0", 01:02:16.673 "trid": { 01:02:16.673 "trtype": "PCIe", 01:02:16.673 "traddr": "0000:00:11.0" 01:02:16.673 }, 01:02:16.673 "ctrlr_data": { 01:02:16.673 "cntlid": 0, 01:02:16.673 "vendor_id": "0x1b36", 01:02:16.673 "model_number": "QEMU NVMe Ctrl", 01:02:16.673 "serial_number": "12341", 01:02:16.673 "firmware_revision": "8.0.0", 01:02:16.673 "subnqn": "nqn.2019-08.org.qemu:12341", 01:02:16.673 "oacs": { 01:02:16.673 "security": 0, 01:02:16.673 "format": 1, 01:02:16.674 "firmware": 0, 01:02:16.674 "ns_manage": 1 01:02:16.674 }, 01:02:16.674 "multi_ctrlr": false, 01:02:16.674 "ana_reporting": false 01:02:16.674 }, 01:02:16.674 "vs": { 01:02:16.674 "nvme_version": "1.4" 01:02:16.674 }, 01:02:16.674 "ns_data": { 01:02:16.674 "id": 1, 01:02:16.674 "can_share": false 01:02:16.674 } 01:02:16.674 } 01:02:16.674 ], 01:02:16.674 "mp_policy": "active_passive" 01:02:16.674 } 01:02:16.674 } 01:02:16.674 ]' 01:02:16.674 10:13:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:02:16.674 10:13:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:02:16.674 10:13:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:02:16.674 10:13:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 01:02:16.674 10:13:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:02:16.674 10:13:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 01:02:16.674 10:13:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 01:02:16.674 10:13:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 01:02:16.674 10:13:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 01:02:16.674 10:13:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:02:16.674 10:13:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:02:17.240 10:13:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=40fa4415-531a-41a1-8e9a-aa32eea5f7fc 01:02:17.240 10:13:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 01:02:17.240 10:13:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 40fa4415-531a-41a1-8e9a-aa32eea5f7fc 01:02:17.499 10:13:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 01:02:17.758 10:13:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=822c94d0-8e9b-417a-9b2a-dbeb634e3244 01:02:17.758 10:13:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 822c94d0-8e9b-417a-9b2a-dbeb634e3244 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=9f106237-75a0-40b0-aa08-be9e828b25d2 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9f106237-75a0-40b0-aa08-be9e828b25d2 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=9f106237-75a0-40b0-aa08-be9e828b25d2 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 9f106237-75a0-40b0-aa08-be9e828b25d2 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=9f106237-75a0-40b0-aa08-be9e828b25d2 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:02:18.017 10:13:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f106237-75a0-40b0-aa08-be9e828b25d2 01:02:18.275 10:13:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:02:18.275 { 01:02:18.275 "name": "9f106237-75a0-40b0-aa08-be9e828b25d2", 01:02:18.275 "aliases": [ 01:02:18.275 "lvs/nvme0n1p0" 01:02:18.275 ], 01:02:18.275 "product_name": "Logical Volume", 01:02:18.275 "block_size": 4096, 01:02:18.275 "num_blocks": 26476544, 01:02:18.275 "uuid": "9f106237-75a0-40b0-aa08-be9e828b25d2", 01:02:18.275 "assigned_rate_limits": { 01:02:18.275 "rw_ios_per_sec": 0, 01:02:18.275 "rw_mbytes_per_sec": 0, 01:02:18.275 "r_mbytes_per_sec": 0, 01:02:18.275 "w_mbytes_per_sec": 0 01:02:18.275 }, 01:02:18.275 "claimed": false, 01:02:18.275 "zoned": false, 01:02:18.275 "supported_io_types": { 01:02:18.275 "read": true, 01:02:18.275 "write": true, 01:02:18.275 "unmap": true, 01:02:18.275 "flush": false, 01:02:18.275 "reset": true, 01:02:18.275 "nvme_admin": false, 01:02:18.275 "nvme_io": false, 01:02:18.275 "nvme_io_md": false, 01:02:18.275 "write_zeroes": true, 01:02:18.275 "zcopy": false, 01:02:18.275 "get_zone_info": false, 01:02:18.275 "zone_management": false, 01:02:18.275 "zone_append": false, 01:02:18.275 "compare": false, 01:02:18.275 "compare_and_write": false, 01:02:18.275 "abort": false, 01:02:18.275 "seek_hole": true, 01:02:18.275 "seek_data": true, 01:02:18.275 "copy": false, 01:02:18.275 "nvme_iov_md": false 01:02:18.275 }, 01:02:18.275 "driver_specific": { 01:02:18.275 "lvol": { 01:02:18.275 "lvol_store_uuid": "822c94d0-8e9b-417a-9b2a-dbeb634e3244", 01:02:18.275 "base_bdev": "nvme0n1", 01:02:18.275 "thin_provision": true, 01:02:18.275 "num_allocated_clusters": 0, 01:02:18.275 "snapshot": false, 01:02:18.275 "clone": false, 01:02:18.275 "esnap_clone": false 01:02:18.275 } 01:02:18.275 } 01:02:18.275 } 01:02:18.275 ]' 01:02:18.534 10:13:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:02:18.534 10:13:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:02:18.534 10:13:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:02:18.534 10:13:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 01:02:18.534 10:13:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:02:18.534 10:13:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 01:02:18.534 10:13:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 01:02:18.534 10:13:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 01:02:18.534 10:13:25 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 01:02:18.793 10:13:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 01:02:18.793 10:13:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 01:02:18.793 10:13:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 9f106237-75a0-40b0-aa08-be9e828b25d2 01:02:18.793 10:13:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=9f106237-75a0-40b0-aa08-be9e828b25d2 01:02:18.793 10:13:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:02:18.793 10:13:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:02:18.793 10:13:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:02:18.793 10:13:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f106237-75a0-40b0-aa08-be9e828b25d2 01:02:19.360 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:02:19.360 { 01:02:19.360 "name": "9f106237-75a0-40b0-aa08-be9e828b25d2", 01:02:19.360 "aliases": [ 01:02:19.360 "lvs/nvme0n1p0" 01:02:19.360 ], 01:02:19.360 "product_name": "Logical Volume", 01:02:19.360 "block_size": 4096, 01:02:19.360 "num_blocks": 26476544, 01:02:19.360 "uuid": "9f106237-75a0-40b0-aa08-be9e828b25d2", 01:02:19.360 "assigned_rate_limits": { 01:02:19.360 "rw_ios_per_sec": 0, 01:02:19.360 "rw_mbytes_per_sec": 0, 01:02:19.360 "r_mbytes_per_sec": 0, 01:02:19.360 "w_mbytes_per_sec": 0 01:02:19.360 }, 01:02:19.360 "claimed": false, 01:02:19.360 "zoned": false, 01:02:19.360 "supported_io_types": { 01:02:19.360 "read": true, 01:02:19.360 "write": true, 01:02:19.360 "unmap": true, 01:02:19.360 "flush": false, 01:02:19.360 "reset": true, 01:02:19.360 "nvme_admin": false, 01:02:19.360 "nvme_io": false, 01:02:19.360 "nvme_io_md": false, 01:02:19.360 "write_zeroes": true, 01:02:19.360 "zcopy": false, 01:02:19.360 "get_zone_info": false, 01:02:19.360 "zone_management": false, 01:02:19.360 "zone_append": false, 01:02:19.360 "compare": false, 01:02:19.360 "compare_and_write": false, 01:02:19.360 "abort": false, 01:02:19.360 "seek_hole": true, 01:02:19.360 "seek_data": true, 01:02:19.360 "copy": false, 01:02:19.360 "nvme_iov_md": false 01:02:19.360 }, 01:02:19.360 "driver_specific": { 01:02:19.360 "lvol": { 01:02:19.360 "lvol_store_uuid": "822c94d0-8e9b-417a-9b2a-dbeb634e3244", 01:02:19.360 "base_bdev": "nvme0n1", 01:02:19.360 "thin_provision": true, 01:02:19.360 "num_allocated_clusters": 0, 01:02:19.360 "snapshot": false, 01:02:19.360 "clone": false, 01:02:19.360 "esnap_clone": false 01:02:19.360 } 01:02:19.360 } 01:02:19.360 } 01:02:19.360 ]' 01:02:19.360 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:02:19.360 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:02:19.360 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:02:19.360 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 01:02:19.360 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:02:19.360 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 01:02:19.360 10:13:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 01:02:19.360 10:13:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 01:02:19.618 10:13:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 01:02:19.618 10:13:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 9f106237-75a0-40b0-aa08-be9e828b25d2 01:02:19.618 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=9f106237-75a0-40b0-aa08-be9e828b25d2 01:02:19.618 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:02:19.618 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:02:19.618 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:02:19.618 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f106237-75a0-40b0-aa08-be9e828b25d2 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:02:19.875 { 01:02:19.875 "name": "9f106237-75a0-40b0-aa08-be9e828b25d2", 01:02:19.875 "aliases": [ 01:02:19.875 "lvs/nvme0n1p0" 01:02:19.875 ], 01:02:19.875 "product_name": "Logical Volume", 01:02:19.875 "block_size": 4096, 01:02:19.875 "num_blocks": 26476544, 01:02:19.875 "uuid": "9f106237-75a0-40b0-aa08-be9e828b25d2", 01:02:19.875 "assigned_rate_limits": { 01:02:19.875 "rw_ios_per_sec": 0, 01:02:19.875 "rw_mbytes_per_sec": 0, 01:02:19.875 "r_mbytes_per_sec": 0, 01:02:19.875 "w_mbytes_per_sec": 0 01:02:19.875 }, 01:02:19.875 "claimed": false, 01:02:19.875 "zoned": false, 01:02:19.875 "supported_io_types": { 01:02:19.875 "read": true, 01:02:19.875 "write": true, 01:02:19.875 "unmap": true, 01:02:19.875 "flush": false, 01:02:19.875 "reset": true, 01:02:19.875 "nvme_admin": false, 01:02:19.875 "nvme_io": false, 01:02:19.875 "nvme_io_md": false, 01:02:19.875 "write_zeroes": true, 01:02:19.875 "zcopy": false, 01:02:19.875 "get_zone_info": false, 01:02:19.875 "zone_management": false, 01:02:19.875 "zone_append": false, 01:02:19.875 "compare": false, 01:02:19.875 "compare_and_write": false, 01:02:19.875 "abort": false, 01:02:19.875 "seek_hole": true, 01:02:19.875 "seek_data": true, 01:02:19.875 "copy": false, 01:02:19.875 "nvme_iov_md": false 01:02:19.875 }, 01:02:19.875 "driver_specific": { 01:02:19.875 "lvol": { 01:02:19.875 "lvol_store_uuid": "822c94d0-8e9b-417a-9b2a-dbeb634e3244", 01:02:19.875 "base_bdev": "nvme0n1", 01:02:19.875 "thin_provision": true, 01:02:19.875 "num_allocated_clusters": 0, 01:02:19.875 "snapshot": false, 01:02:19.875 "clone": false, 01:02:19.875 "esnap_clone": false 01:02:19.875 } 01:02:19.875 } 01:02:19.875 } 01:02:19.875 ]' 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9f106237-75a0-40b0-aa08-be9e828b25d2 
--l2p_dram_limit 10' 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 01:02:19.875 10:13:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9f106237-75a0-40b0-aa08-be9e828b25d2 --l2p_dram_limit 10 -c nvc0n1p0 01:02:20.442 [2024-12-09 10:13:27.199519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:20.442 [2024-12-09 10:13:27.199578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:02:20.442 [2024-12-09 10:13:27.199604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:02:20.442 [2024-12-09 10:13:27.199618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:20.442 [2024-12-09 10:13:27.199703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:20.442 [2024-12-09 10:13:27.199722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:02:20.442 [2024-12-09 10:13:27.199739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 01:02:20.442 [2024-12-09 10:13:27.199752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:20.442 [2024-12-09 10:13:27.199795] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:02:20.442 [2024-12-09 10:13:27.200920] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:02:20.442 [2024-12-09 10:13:27.200962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:20.442 [2024-12-09 10:13:27.200993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:02:20.442 [2024-12-09 10:13:27.201008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 01:02:20.442 [2024-12-09 10:13:27.201021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:20.442 [2024-12-09 10:13:27.201178] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9c7a4782-9886-43d6-8da8-01df0a702c96 01:02:20.442 [2024-12-09 10:13:27.203296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:20.442 [2024-12-09 10:13:27.203344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 01:02:20.442 [2024-12-09 10:13:27.203361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 01:02:20.442 [2024-12-09 10:13:27.203378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:20.442 [2024-12-09 10:13:27.214688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:20.442 [2024-12-09 10:13:27.214746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:02:20.442 [2024-12-09 10:13:27.214780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.230 ms 01:02:20.442 [2024-12-09 10:13:27.214795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:20.442 [2024-12-09 10:13:27.214923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:20.442 [2024-12-09 10:13:27.214965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:02:20.442 [2024-12-09 10:13:27.214987] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 01:02:20.442 [2024-12-09 10:13:27.215019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:20.442 [2024-12-09 10:13:27.215124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:20.442 [2024-12-09 10:13:27.215156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:02:20.442 [2024-12-09 10:13:27.215173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:02:20.442 [2024-12-09 10:13:27.215188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:20.442 [2024-12-09 10:13:27.215224] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:02:20.442 [2024-12-09 10:13:27.221173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:20.442 [2024-12-09 10:13:27.221249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:02:20.442 [2024-12-09 10:13:27.221280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.956 ms 01:02:20.442 [2024-12-09 10:13:27.221295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:20.442 [2024-12-09 10:13:27.221346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:20.442 [2024-12-09 10:13:27.221362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:02:20.442 [2024-12-09 10:13:27.221378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:02:20.442 [2024-12-09 10:13:27.221400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:20.442 [2024-12-09 10:13:27.221450] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 01:02:20.442 [2024-12-09 10:13:27.221615] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:02:20.442 [2024-12-09 10:13:27.221639] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:02:20.442 [2024-12-09 10:13:27.221658] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:02:20.442 [2024-12-09 10:13:27.221687] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:02:20.443 [2024-12-09 10:13:27.221701] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:02:20.443 [2024-12-09 10:13:27.221717] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:02:20.443 [2024-12-09 10:13:27.221733] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:02:20.443 [2024-12-09 10:13:27.221762] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:02:20.443 [2024-12-09 10:13:27.221785] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:02:20.443 [2024-12-09 10:13:27.221800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:20.443 [2024-12-09 10:13:27.221825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:02:20.443 [2024-12-09 10:13:27.221850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 01:02:20.443 [2024-12-09 10:13:27.221862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:20.443 [2024-12-09 10:13:27.221976] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:20.443 [2024-12-09 10:13:27.221999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:02:20.443 [2024-12-09 10:13:27.222015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 01:02:20.443 [2024-12-09 10:13:27.222027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:20.443 [2024-12-09 10:13:27.222154] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:02:20.443 [2024-12-09 10:13:27.222173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:02:20.443 [2024-12-09 10:13:27.222189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:02:20.443 [2024-12-09 10:13:27.222203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:02:20.443 [2024-12-09 10:13:27.222229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:02:20.443 [2024-12-09 10:13:27.222271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:02:20.443 [2024-12-09 10:13:27.222287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:02:20.443 [2024-12-09 10:13:27.222314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:02:20.443 [2024-12-09 10:13:27.222325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:02:20.443 [2024-12-09 10:13:27.222346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:02:20.443 [2024-12-09 10:13:27.222358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:02:20.443 [2024-12-09 10:13:27.222371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:02:20.443 [2024-12-09 10:13:27.222382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:02:20.443 [2024-12-09 10:13:27.222410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:02:20.443 [2024-12-09 10:13:27.222426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:02:20.443 [2024-12-09 10:13:27.222453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:02:20.443 [2024-12-09 10:13:27.222478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:02:20.443 [2024-12-09 10:13:27.222496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:02:20.443 [2024-12-09 10:13:27.222520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:02:20.443 [2024-12-09 10:13:27.222534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:02:20.443 [2024-12-09 10:13:27.222558] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:02:20.443 [2024-12-09 10:13:27.222570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:02:20.443 [2024-12-09 10:13:27.222594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:02:20.443 [2024-12-09 10:13:27.222611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:02:20.443 [2024-12-09 10:13:27.222637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:02:20.443 [2024-12-09 10:13:27.222649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:02:20.443 [2024-12-09 10:13:27.222664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:02:20.443 [2024-12-09 10:13:27.222675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:02:20.443 [2024-12-09 10:13:27.222689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:02:20.443 [2024-12-09 10:13:27.222700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:02:20.443 [2024-12-09 10:13:27.222726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:02:20.443 [2024-12-09 10:13:27.222739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222750] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:02:20.443 [2024-12-09 10:13:27.222765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:02:20.443 [2024-12-09 10:13:27.222777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:02:20.443 [2024-12-09 10:13:27.222791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:02:20.443 [2024-12-09 10:13:27.222803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:02:20.443 [2024-12-09 10:13:27.222820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:02:20.443 [2024-12-09 10:13:27.222832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:02:20.443 [2024-12-09 10:13:27.222846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:02:20.443 [2024-12-09 10:13:27.222858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:02:20.443 [2024-12-09 10:13:27.222880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:02:20.443 [2024-12-09 10:13:27.222901] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:02:20.443 [2024-12-09 10:13:27.222924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:02:20.443 [2024-12-09 10:13:27.222938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:02:20.443 [2024-12-09 10:13:27.222953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:02:20.443 [2024-12-09 10:13:27.222965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:02:20.443 [2024-12-09 10:13:27.222979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:02:20.443 [2024-12-09 10:13:27.222992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:02:20.443 [2024-12-09 10:13:27.223006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:02:20.443 [2024-12-09 10:13:27.223019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:02:20.443 [2024-12-09 10:13:27.223035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:02:20.443 [2024-12-09 10:13:27.223047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:02:20.443 [2024-12-09 10:13:27.223065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:02:20.443 [2024-12-09 10:13:27.223084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:02:20.443 [2024-12-09 10:13:27.223099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:02:20.443 [2024-12-09 10:13:27.223111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:02:20.443 [2024-12-09 10:13:27.223127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:02:20.443 [2024-12-09 10:13:27.223149] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:02:20.443 [2024-12-09 10:13:27.223176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:02:20.443 [2024-12-09 10:13:27.223190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:02:20.443 [2024-12-09 10:13:27.223205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:02:20.443 [2024-12-09 10:13:27.223226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:02:20.443 [2024-12-09 10:13:27.223266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:02:20.443 [2024-12-09 10:13:27.223283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:20.443 [2024-12-09 10:13:27.223298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:02:20.443 [2024-12-09 10:13:27.223312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 01:02:20.443 [2024-12-09 10:13:27.223326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:20.443 [2024-12-09 10:13:27.223385] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 01:02:20.443 [2024-12-09 10:13:27.223408] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 01:02:23.036 [2024-12-09 10:13:29.934752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.036 [2024-12-09 10:13:29.934869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 01:02:23.036 [2024-12-09 10:13:29.934893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2711.375 ms 01:02:23.036 [2024-12-09 10:13:29.934910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.036 [2024-12-09 10:13:29.977147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.036 [2024-12-09 10:13:29.977227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:02:23.036 [2024-12-09 10:13:29.977261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.926 ms 01:02:23.036 [2024-12-09 10:13:29.977280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.036 [2024-12-09 10:13:29.977513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.036 [2024-12-09 10:13:29.977542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:02:23.036 [2024-12-09 10:13:29.977558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 01:02:23.036 [2024-12-09 10:13:29.977582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.036 [2024-12-09 10:13:30.026750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.036 [2024-12-09 10:13:30.026835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:02:23.036 [2024-12-09 10:13:30.026857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.084 ms 01:02:23.036 [2024-12-09 10:13:30.026872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.036 [2024-12-09 10:13:30.026947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.036 [2024-12-09 10:13:30.026973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:02:23.036 [2024-12-09 10:13:30.026987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:02:23.036 [2024-12-09 10:13:30.027015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.036 [2024-12-09 10:13:30.027668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.036 [2024-12-09 10:13:30.027704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:02:23.036 [2024-12-09 10:13:30.027721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 01:02:23.036 [2024-12-09 10:13:30.027736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.036 [2024-12-09 10:13:30.027892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.036 [2024-12-09 10:13:30.027913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:02:23.036 [2024-12-09 10:13:30.027930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 01:02:23.036 [2024-12-09 10:13:30.027948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.036 [2024-12-09 10:13:30.051335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.036 [2024-12-09 10:13:30.051414] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:02:23.036 [2024-12-09 10:13:30.051435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.359 ms 01:02:23.036 [2024-12-09 10:13:30.051451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.036 [2024-12-09 10:13:30.078712] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:02:23.295 [2024-12-09 10:13:30.083770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.295 [2024-12-09 10:13:30.083845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:02:23.295 [2024-12-09 10:13:30.083874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.189 ms 01:02:23.295 [2024-12-09 10:13:30.083888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.295 [2024-12-09 10:13:30.163452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.295 [2024-12-09 10:13:30.163547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 01:02:23.295 [2024-12-09 10:13:30.163574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.497 ms 01:02:23.295 [2024-12-09 10:13:30.163589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.295 [2024-12-09 10:13:30.163864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.295 [2024-12-09 10:13:30.163901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:02:23.295 [2024-12-09 10:13:30.163923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 01:02:23.295 [2024-12-09 10:13:30.163936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.295 [2024-12-09 10:13:30.199111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.295 [2024-12-09 10:13:30.199202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 01:02:23.295 [2024-12-09 10:13:30.199243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.095 ms 01:02:23.295 [2024-12-09 10:13:30.199257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.295 [2024-12-09 10:13:30.233350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.295 [2024-12-09 10:13:30.233425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 01:02:23.295 [2024-12-09 10:13:30.233450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.014 ms 01:02:23.295 [2024-12-09 10:13:30.233463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.296 [2024-12-09 10:13:30.234456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.296 [2024-12-09 10:13:30.234530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:02:23.296 [2024-12-09 10:13:30.234581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 01:02:23.296 [2024-12-09 10:13:30.234611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.296 [2024-12-09 10:13:30.331227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.296 [2024-12-09 10:13:30.331327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 01:02:23.296 [2024-12-09 10:13:30.331372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.532 ms 01:02:23.296 [2024-12-09 10:13:30.331386] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.555 [2024-12-09 10:13:30.369916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.555 [2024-12-09 10:13:30.370154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 01:02:23.555 [2024-12-09 10:13:30.370205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.313 ms 01:02:23.555 [2024-12-09 10:13:30.370235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.555 [2024-12-09 10:13:30.416350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.555 [2024-12-09 10:13:30.416415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 01:02:23.555 [2024-12-09 10:13:30.416456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.856 ms 01:02:23.555 [2024-12-09 10:13:30.416472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.555 [2024-12-09 10:13:30.462169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.555 [2024-12-09 10:13:30.462285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:02:23.555 [2024-12-09 10:13:30.462325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.618 ms 01:02:23.555 [2024-12-09 10:13:30.462342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.555 [2024-12-09 10:13:30.462431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.555 [2024-12-09 10:13:30.462459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:02:23.555 [2024-12-09 10:13:30.462488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:02:23.555 [2024-12-09 10:13:30.462505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.555 [2024-12-09 10:13:30.462677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:02:23.555 [2024-12-09 10:13:30.462716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:02:23.555 [2024-12-09 10:13:30.462739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 01:02:23.555 [2024-12-09 10:13:30.462756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:02:23.555 [2024-12-09 10:13:30.464856] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3264.479 ms, result 0 01:02:23.555 { 01:02:23.555 "name": "ftl0", 01:02:23.555 "uuid": "9c7a4782-9886-43d6-8da8-01df0a702c96" 01:02:23.555 } 01:02:23.555 10:13:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 01:02:23.555 10:13:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 01:02:23.814 10:13:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 01:02:23.814 10:13:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 01:02:23.814 10:13:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 01:02:24.382 /dev/nbd0 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 01:02:24.382 1+0 records in 01:02:24.382 1+0 records out 01:02:24.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379312 s, 10.8 MB/s 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 01:02:24.382 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 01:02:24.382 [2024-12-09 10:13:31.297929] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:02:24.382 [2024-12-09 10:13:31.298162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81773 ] 01:02:24.641 [2024-12-09 10:13:31.491027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:24.641 [2024-12-09 10:13:31.650941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:26.018  [2024-12-09T10:13:34.440Z] Copying: 149/1024 [MB] (149 MBps) [2024-12-09T10:13:35.377Z] Copying: 297/1024 [MB] (147 MBps) [2024-12-09T10:13:36.314Z] Copying: 449/1024 [MB] (151 MBps) [2024-12-09T10:13:37.251Z] Copying: 604/1024 [MB] (155 MBps) [2024-12-09T10:13:38.187Z] Copying: 766/1024 [MB] (162 MBps) [2024-12-09T10:13:38.759Z] Copying: 928/1024 [MB] (161 MBps) [2024-12-09T10:13:40.136Z] Copying: 1024/1024 [MB] (average 155 MBps) 01:02:33.092 01:02:33.092 10:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 01:02:35.625 10:13:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 01:02:35.625 [2024-12-09 10:13:42.282961] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
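The write phase just issued above stages 1 GiB of random data (262144 blocks of 4096 B), checksums it, and then replays it onto the FTL device through /dev/nbd0 with O_DIRECT. A minimal sketch of that sequence, using only the flags and paths that appear in this log:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile    # checksum of the staged data
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 \
        --bs=4096 --count=262144 --oflag=direct              # O_DIRECT write through the NBD-exposed ftl0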
01:02:35.625 [2024-12-09 10:13:42.283419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81883 ] 01:02:35.625 [2024-12-09 10:13:42.469852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:35.625 [2024-12-09 10:13:42.644710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:37.001  [2024-12-09T10:13:45.420Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-09T10:13:46.355Z] Copying: 27/1024 [MB] (13 MBps) [2024-12-09T10:13:47.289Z] Copying: 41/1024 [MB] (14 MBps) [2024-12-09T10:13:48.225Z] Copying: 56/1024 [MB] (15 MBps) [2024-12-09T10:13:49.187Z] Copying: 71/1024 [MB] (15 MBps) [2024-12-09T10:13:50.123Z] Copying: 86/1024 [MB] (15 MBps) [2024-12-09T10:13:51.059Z] Copying: 101/1024 [MB] (15 MBps) [2024-12-09T10:13:52.436Z] Copying: 118/1024 [MB] (16 MBps) [2024-12-09T10:13:53.371Z] Copying: 133/1024 [MB] (15 MBps) [2024-12-09T10:13:54.306Z] Copying: 149/1024 [MB] (15 MBps) [2024-12-09T10:13:55.242Z] Copying: 165/1024 [MB] (15 MBps) [2024-12-09T10:13:56.179Z] Copying: 179/1024 [MB] (14 MBps) [2024-12-09T10:13:57.116Z] Copying: 194/1024 [MB] (14 MBps) [2024-12-09T10:13:58.052Z] Copying: 209/1024 [MB] (14 MBps) [2024-12-09T10:13:59.452Z] Copying: 224/1024 [MB] (14 MBps) [2024-12-09T10:14:00.019Z] Copying: 239/1024 [MB] (14 MBps) [2024-12-09T10:14:01.397Z] Copying: 254/1024 [MB] (14 MBps) [2024-12-09T10:14:02.333Z] Copying: 268/1024 [MB] (14 MBps) [2024-12-09T10:14:03.274Z] Copying: 283/1024 [MB] (14 MBps) [2024-12-09T10:14:04.208Z] Copying: 299/1024 [MB] (15 MBps) [2024-12-09T10:14:05.141Z] Copying: 314/1024 [MB] (15 MBps) [2024-12-09T10:14:06.073Z] Copying: 330/1024 [MB] (16 MBps) [2024-12-09T10:14:07.446Z] Copying: 344/1024 [MB] (13 MBps) [2024-12-09T10:14:08.018Z] Copying: 359/1024 [MB] (15 MBps) [2024-12-09T10:14:09.393Z] Copying: 374/1024 [MB] (14 MBps) [2024-12-09T10:14:10.328Z] Copying: 387/1024 [MB] (13 MBps) [2024-12-09T10:14:11.264Z] Copying: 401/1024 [MB] (13 MBps) [2024-12-09T10:14:12.200Z] Copying: 415/1024 [MB] (13 MBps) [2024-12-09T10:14:13.134Z] Copying: 428/1024 [MB] (13 MBps) [2024-12-09T10:14:14.071Z] Copying: 442/1024 [MB] (13 MBps) [2024-12-09T10:14:15.447Z] Copying: 456/1024 [MB] (13 MBps) [2024-12-09T10:14:16.383Z] Copying: 470/1024 [MB] (14 MBps) [2024-12-09T10:14:17.317Z] Copying: 485/1024 [MB] (15 MBps) [2024-12-09T10:14:18.254Z] Copying: 500/1024 [MB] (14 MBps) [2024-12-09T10:14:19.274Z] Copying: 515/1024 [MB] (14 MBps) [2024-12-09T10:14:20.210Z] Copying: 530/1024 [MB] (14 MBps) [2024-12-09T10:14:21.147Z] Copying: 544/1024 [MB] (14 MBps) [2024-12-09T10:14:22.085Z] Copying: 559/1024 [MB] (14 MBps) [2024-12-09T10:14:23.020Z] Copying: 574/1024 [MB] (14 MBps) [2024-12-09T10:14:24.396Z] Copying: 588/1024 [MB] (14 MBps) [2024-12-09T10:14:25.331Z] Copying: 603/1024 [MB] (14 MBps) [2024-12-09T10:14:26.266Z] Copying: 618/1024 [MB] (14 MBps) [2024-12-09T10:14:27.202Z] Copying: 633/1024 [MB] (15 MBps) [2024-12-09T10:14:28.137Z] Copying: 648/1024 [MB] (14 MBps) [2024-12-09T10:14:29.074Z] Copying: 663/1024 [MB] (14 MBps) [2024-12-09T10:14:30.023Z] Copying: 678/1024 [MB] (14 MBps) [2024-12-09T10:14:31.399Z] Copying: 692/1024 [MB] (14 MBps) [2024-12-09T10:14:32.336Z] Copying: 707/1024 [MB] (14 MBps) [2024-12-09T10:14:33.271Z] Copying: 722/1024 [MB] (14 MBps) [2024-12-09T10:14:34.206Z] Copying: 736/1024 [MB] (14 MBps) 
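For scale: at the ~13-16 MBps per-tick rate shown in these progress entries, the full 1024 MB copy needs roughly 1024 / 14 ≈ 73 s, which matches the timestamps on the ticks (10:13:45 through 10:14:53). The same spdk_dd staged the source file at ~155 MBps above, so the slowdown is evidently on the FTL write path through NBD with O_DIRECT rather than in spdk_dd itself.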
[2024-12-09T10:14:35.142Z] Copying: 751/1024 [MB] (15 MBps) [2024-12-09T10:14:36.077Z] Copying: 766/1024 [MB] (14 MBps) [2024-12-09T10:14:37.453Z] Copying: 781/1024 [MB] (14 MBps) [2024-12-09T10:14:38.020Z] Copying: 795/1024 [MB] (14 MBps) [2024-12-09T10:14:39.397Z] Copying: 811/1024 [MB] (15 MBps) [2024-12-09T10:14:40.334Z] Copying: 825/1024 [MB] (14 MBps) [2024-12-09T10:14:41.271Z] Copying: 840/1024 [MB] (14 MBps) [2024-12-09T10:14:42.208Z] Copying: 855/1024 [MB] (14 MBps) [2024-12-09T10:14:43.145Z] Copying: 870/1024 [MB] (14 MBps) [2024-12-09T10:14:44.081Z] Copying: 884/1024 [MB] (14 MBps) [2024-12-09T10:14:45.018Z] Copying: 900/1024 [MB] (15 MBps) [2024-12-09T10:14:46.395Z] Copying: 914/1024 [MB] (14 MBps) [2024-12-09T10:14:47.333Z] Copying: 930/1024 [MB] (15 MBps) [2024-12-09T10:14:48.269Z] Copying: 946/1024 [MB] (15 MBps) [2024-12-09T10:14:49.205Z] Copying: 961/1024 [MB] (15 MBps) [2024-12-09T10:14:50.140Z] Copying: 976/1024 [MB] (15 MBps) [2024-12-09T10:14:51.076Z] Copying: 992/1024 [MB] (15 MBps) [2024-12-09T10:14:52.449Z] Copying: 1008/1024 [MB] (15 MBps) [2024-12-09T10:14:52.449Z] Copying: 1023/1024 [MB] (15 MBps) [2024-12-09T10:14:53.387Z] Copying: 1024/1024 [MB] (average 14 MBps) 01:03:46.343 01:03:46.343 10:14:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 01:03:46.343 10:14:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 01:03:46.602 10:14:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 01:03:46.861 [2024-12-09 10:14:53.850922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:46.861 [2024-12-09 10:14:53.851021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:03:46.861 [2024-12-09 10:14:53.851059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:03:46.861 [2024-12-09 10:14:53.851089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:46.861 [2024-12-09 10:14:53.851130] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:03:46.861 [2024-12-09 10:14:53.855557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:46.861 [2024-12-09 10:14:53.855643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:03:46.861 [2024-12-09 10:14:53.855676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.380 ms 01:03:46.861 [2024-12-09 10:14:53.855688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:46.861 [2024-12-09 10:14:53.857361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:46.861 [2024-12-09 10:14:53.857433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:03:46.861 [2024-12-09 10:14:53.857454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 01:03:46.861 [2024-12-09 10:14:53.857467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:46.861 [2024-12-09 10:14:53.875292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:46.861 [2024-12-09 10:14:53.875364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:03:46.861 [2024-12-09 10:14:53.875386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.789 ms 01:03:46.862 [2024-12-09 10:14:53.875399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
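The unload traced here is the clean-shutdown half of the test: after sync /dev/nbd0 and nbd_stop_disk, bdev_ftl_unload persists the L2P, NV cache, valid-map, P2L, band and trim metadata, writes the superblock, and finally flips the device to the clean state. A sketch of the teardown, using only the RPCs that appear in this log:

    sync /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0   # persists metadata, sets FTL clean state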
01:03:46.862 [2024-12-09 10:14:53.883247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:46.862 [2024-12-09 10:14:53.883345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:03:46.862 [2024-12-09 10:14:53.883365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.799 ms 01:03:46.862 [2024-12-09 10:14:53.883377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.122 [2024-12-09 10:14:53.921242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:47.122 [2024-12-09 10:14:53.921309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:03:47.122 [2024-12-09 10:14:53.921347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.746 ms 01:03:47.122 [2024-12-09 10:14:53.921360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.122 [2024-12-09 10:14:53.943598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:47.122 [2024-12-09 10:14:53.943659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:03:47.122 [2024-12-09 10:14:53.943684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.160 ms 01:03:47.122 [2024-12-09 10:14:53.943708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.122 [2024-12-09 10:14:53.944049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:47.122 [2024-12-09 10:14:53.944082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:03:47.122 [2024-12-09 10:14:53.944101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 01:03:47.122 [2024-12-09 10:14:53.944114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.122 [2024-12-09 10:14:53.981064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:47.122 [2024-12-09 10:14:53.981122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:03:47.122 [2024-12-09 10:14:53.981158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.919 ms 01:03:47.122 [2024-12-09 10:14:53.981185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.122 [2024-12-09 10:14:54.018106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:47.122 [2024-12-09 10:14:54.018154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:03:47.122 [2024-12-09 10:14:54.018176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.853 ms 01:03:47.122 [2024-12-09 10:14:54.018189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.122 [2024-12-09 10:14:54.053897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:47.122 [2024-12-09 10:14:54.053980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:03:47.122 [2024-12-09 10:14:54.054003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.631 ms 01:03:47.122 [2024-12-09 10:14:54.054015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.122 [2024-12-09 10:14:54.089547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:47.122 [2024-12-09 10:14:54.089606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:03:47.122 [2024-12-09 10:14:54.089627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.387 ms 01:03:47.122 [2024-12-09 
10:14:54.089640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.122 [2024-12-09 10:14:54.089693] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:03:47.122 [2024-12-09 10:14:54.089748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.089993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090111] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:03:47.122 [2024-12-09 10:14:54.090180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 
10:14:54.090529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
01:03:47.123 [2024-12-09 10:14:54.090906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.090989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:03:47.123 [2024-12-09 10:14:54.091200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:03:47.124 [2024-12-09 10:14:54.091215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:03:47.124 [2024-12-09 10:14:54.091230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:03:47.124 [2024-12-09 10:14:54.091247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 01:03:47.124 [2024-12-09 10:14:54.091261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:03:47.124 [2024-12-09 10:14:54.091286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:03:47.124 [2024-12-09 10:14:54.091311] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:03:47.124 [2024-12-09 10:14:54.091327] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c7a4782-9886-43d6-8da8-01df0a702c96 01:03:47.124 [2024-12-09 10:14:54.091341] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:03:47.124 [2024-12-09 10:14:54.091358] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:03:47.124 [2024-12-09 10:14:54.091373] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:03:47.124 [2024-12-09 10:14:54.091388] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:03:47.124 [2024-12-09 10:14:54.091400] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:03:47.124 [2024-12-09 10:14:54.091414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:03:47.124 [2024-12-09 10:14:54.091426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:03:47.124 [2024-12-09 10:14:54.091439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:03:47.124 [2024-12-09 10:14:54.091451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:03:47.124 [2024-12-09 10:14:54.091466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:47.124 [2024-12-09 10:14:54.091478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:03:47.124 [2024-12-09 10:14:54.091494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 01:03:47.124 [2024-12-09 10:14:54.091505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.124 [2024-12-09 10:14:54.110182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:47.124 [2024-12-09 10:14:54.110229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:03:47.124 [2024-12-09 10:14:54.110262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.604 ms 01:03:47.124 [2024-12-09 10:14:54.110279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.124 [2024-12-09 10:14:54.110822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:47.124 [2024-12-09 10:14:54.110853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:03:47.124 [2024-12-09 10:14:54.110887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 01:03:47.124 [2024-12-09 10:14:54.110900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.383 [2024-12-09 10:14:54.178672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:03:47.383 [2024-12-09 10:14:54.178738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:03:47.383 [2024-12-09 10:14:54.178790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:03:47.383 [2024-12-09 10:14:54.178804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.383 [2024-12-09 10:14:54.178911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:03:47.383 [2024-12-09 10:14:54.178928] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:03:47.383 [2024-12-09 10:14:54.178943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:03:47.383 [2024-12-09 10:14:54.178955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.383 [2024-12-09 10:14:54.179113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:03:47.383 [2024-12-09 10:14:54.179152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:03:47.383 [2024-12-09 10:14:54.179169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:03:47.383 [2024-12-09 10:14:54.179210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.383 [2024-12-09 10:14:54.179276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:03:47.383 [2024-12-09 10:14:54.179291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:03:47.383 [2024-12-09 10:14:54.179306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:03:47.383 [2024-12-09 10:14:54.179318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.383 [2024-12-09 10:14:54.306843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:03:47.383 [2024-12-09 10:14:54.306933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:03:47.383 [2024-12-09 10:14:54.306956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:03:47.383 [2024-12-09 10:14:54.306969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.383 [2024-12-09 10:14:54.409391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:03:47.383 [2024-12-09 10:14:54.409522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:03:47.383 [2024-12-09 10:14:54.409547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:03:47.383 [2024-12-09 10:14:54.409562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.383 [2024-12-09 10:14:54.409707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:03:47.383 [2024-12-09 10:14:54.409728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:03:47.383 [2024-12-09 10:14:54.409749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:03:47.383 [2024-12-09 10:14:54.409763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.383 [2024-12-09 10:14:54.409849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:03:47.383 [2024-12-09 10:14:54.409868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:03:47.383 [2024-12-09 10:14:54.409884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:03:47.383 [2024-12-09 10:14:54.409897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.383 [2024-12-09 10:14:54.410055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:03:47.383 [2024-12-09 10:14:54.410077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:03:47.383 [2024-12-09 10:14:54.410094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:03:47.383 [2024-12-09 10:14:54.410110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:47.383 [2024-12-09 10:14:54.410173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
01:03:47.383 [2024-12-09 10:14:54.410173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:03:47.383 [2024-12-09 10:14:54.410194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
01:03:47.383 [2024-12-09 10:14:54.410211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:03:47.383 [2024-12-09 10:14:54.410224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:03:47.383 [2024-12-09 10:14:54.410295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:03:47.383 [2024-12-09 10:14:54.410313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
01:03:47.383 [2024-12-09 10:14:54.410330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:03:47.383 [2024-12-09 10:14:54.410346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:03:47.383 [2024-12-09 10:14:54.410412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:03:47.383 [2024-12-09 10:14:54.410430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
01:03:47.383 [2024-12-09 10:14:54.410446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:03:47.383 [2024-12-09 10:14:54.410469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:03:47.383 [2024-12-09 10:14:54.410648] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 559.680 ms, result 0
01:03:47.383 true
01:03:47.641 10:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81624
01:03:47.641 10:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81624
01:03:47.641 10:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
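The three commands above are the dirty shutdown itself: the first spdk_tgt (pid 81624) is killed with SIGKILL so the FTL clean-shutdown path never runs, its stale trace file in /dev/shm is removed, and spdk_dd then stages 1 GiB of random data (262144 blocks of 4096 bytes) for the write pass. A minimal sketch of the same pattern, assuming $SPDK_BIN_DIR points at an SPDK build as in this job:

  # Dirty-shutdown sketch, assuming $SPDK_BIN_DIR is set as in this job.
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 &
  tgt_pid=$!
  # ... create the FTL bdev and push writes through it, then simulate a crash:
  kill -9 "$tgt_pid"                           # no clean FTL shutdown runs
  rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"  # drop the stale trace file
  # The next process to open the same FTL device finds it was never marked
  # clean and has to recover, which is what the startup below goes through.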
01:03:47.641 [2024-12-09 10:14:54.560167] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
01:03:47.641 [2024-12-09 10:14:54.560370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82588 ]
01:03:47.900 [2024-12-09 10:14:54.755305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:03:47.900 [2024-12-09 10:14:54.908740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:03:49.272  [2024-12-09T10:14:57.691Z] Copying: 143/1024 [MB] (143 MBps) [2024-12-09T10:14:58.644Z] Copying: 284/1024 [MB] (141 MBps) [2024-12-09T10:14:59.579Z] Copying: 424/1024 [MB] (139 MBps) [2024-12-09T10:15:00.514Z] Copying: 564/1024 [MB] (139 MBps) [2024-12-09T10:15:01.451Z] Copying: 706/1024 [MB] (142 MBps) [2024-12-09T10:15:02.386Z] Copying: 849/1024 [MB] (143 MBps) [2024-12-09T10:15:02.645Z] Copying: 991/1024 [MB] (142 MBps) [2024-12-09T10:15:04.024Z] Copying: 1024/1024 [MB] (average 141 MBps)
01:03:56.980
01:03:56.980 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81624 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
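The Killed notice is bash reporting, after the fact, that the backgrounded spdk_tgt from line 87 of the test script was taken down; the 1024 MB of random data meanwhile landed in testfile2 at an average of 141 MBps. The next pass replays that file through the ftl0 bdev and, as the progress lines below show, averages about 23 MBps, so the elapsed times of the two copies differ by roughly that same factor:

  # Elapsed-time sanity check for the two 1024 MB spdk_dd passes,
  # using the "average" rates printed on the progress lines.
  awk 'BEGIN {
      printf "to plain file : %.1f s\n", 1024 / 141  # ~7 s
      printf "through ftl0  : %.1f s\n", 1024 / 23   # ~45 s
  }'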
01:03:56.980 10:15:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
01:03:56.980 [2024-12-09 10:15:03.909384] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
01:03:56.980 [2024-12-09 10:15:03.909554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82680 ]
01:03:57.239 [2024-12-09 10:15:04.102130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:03:57.239 [2024-12-09 10:15:04.278135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:03:57.806 [2024-12-09 10:15:04.709788] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
01:03:57.806 [2024-12-09 10:15:04.709860] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
01:03:57.806 [2024-12-09 10:15:04.779426] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
01:03:57.806 [2024-12-09 10:15:04.779945] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
01:03:57.806 [2024-12-09 10:15:04.780212] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
01:03:58.065 [2024-12-09 10:15:05.081219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:03:58.065 [2024-12-09 10:15:05.081296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
01:03:58.065 [2024-12-09 10:15:05.081331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
01:03:58.065 [2024-12-09 10:15:05.081378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:03:58.065 [2024-12-09 10:15:05.081484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:03:58.065 [2024-12-09 10:15:05.081502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
01:03:58.065 [2024-12-09 10:15:05.081515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms
01:03:58.065 [2024-12-09 10:15:05.081526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:03:58.065 [2024-12-09 10:15:05.081556] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
01:03:58.065 [2024-12-09 10:15:05.082684] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
01:03:58.065 [2024-12-09 10:15:05.082755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:03:58.065 [2024-12-09 10:15:05.082783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
01:03:58.065 [2024-12-09 10:15:05.082795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.191 ms
01:03:58.065 [2024-12-09 10:15:05.082806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:03:58.065 [2024-12-09 10:15:05.085140] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
01:03:58.065 [2024-12-09 10:15:05.105102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:03:58.065 [2024-12-09 10:15:05.105160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
01:03:58.065 [2024-12-09 10:15:05.105178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.963 ms
01:03:58.065 [2024-12-09 10:15:05.105189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:03:58.065 [2024-12-09 10:15:05.105340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:03:58.065 [2024-12-09 10:15:05.105375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
01:03:58.065 [2024-12-09 10:15:05.105403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
01:03:58.065 [2024-12-09 10:15:05.105415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:03:58.325 [2024-12-09 10:15:05.116347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:03:58.325 [2024-12-09 10:15:05.116406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
01:03:58.325 [2024-12-09 10:15:05.116424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.842 ms
01:03:58.325 [2024-12-09 10:15:05.116435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:03:58.325 [2024-12-09 10:15:05.116543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:03:58.325 [2024-12-09 10:15:05.116591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
01:03:58.325 [2024-12-09 10:15:05.116618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms
01:03:58.325 [2024-12-09 10:15:05.116643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:03:58.325 [2024-12-09 10:15:05.116753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:03:58.325 [2024-12-09 10:15:05.116771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
01:03:58.325 [2024-12-09 10:15:05.116784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
01:03:58.325 [2024-12-09 10:15:05.116795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:03:58.325 [2024-12-09 10:15:05.116828] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
01:03:58.325 [2024-12-09 10:15:05.122644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:03:58.325 [2024-12-09 10:15:05.122693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
01:03:58.325
[2024-12-09 10:15:05.122737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.824 ms 01:03:58.325 [2024-12-09 10:15:05.122756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.325 [2024-12-09 10:15:05.122834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.325 [2024-12-09 10:15:05.122863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:03:58.325 [2024-12-09 10:15:05.122890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 01:03:58.325 [2024-12-09 10:15:05.122916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.325 [2024-12-09 10:15:05.122963] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:03:58.325 [2024-12-09 10:15:05.123027] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:03:58.325 [2024-12-09 10:15:05.123093] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:03:58.325 [2024-12-09 10:15:05.123113] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:03:58.325 [2024-12-09 10:15:05.123223] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:03:58.325 [2024-12-09 10:15:05.123239] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:03:58.325 [2024-12-09 10:15:05.123255] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:03:58.325 [2024-12-09 10:15:05.123275] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:03:58.325 [2024-12-09 10:15:05.123289] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:03:58.325 [2024-12-09 10:15:05.123301] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:03:58.325 [2024-12-09 10:15:05.123313] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:03:58.325 [2024-12-09 10:15:05.123325] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:03:58.325 [2024-12-09 10:15:05.123336] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:03:58.325 [2024-12-09 10:15:05.123348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.325 [2024-12-09 10:15:05.123359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:03:58.325 [2024-12-09 10:15:05.123372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 01:03:58.325 [2024-12-09 10:15:05.123383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.325 [2024-12-09 10:15:05.123496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.325 [2024-12-09 10:15:05.123519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:03:58.325 [2024-12-09 10:15:05.123532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:03:58.325 [2024-12-09 10:15:05.123543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.326 [2024-12-09 10:15:05.123664] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:03:58.326 [2024-12-09 10:15:05.123689] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:03:58.326 [2024-12-09 10:15:05.123703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:03:58.326 [2024-12-09 10:15:05.123715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:03:58.326 [2024-12-09 10:15:05.123728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:03:58.326 [2024-12-09 10:15:05.123740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:03:58.326 [2024-12-09 10:15:05.123751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:03:58.326 [2024-12-09 10:15:05.123762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:03:58.326 [2024-12-09 10:15:05.123773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:03:58.326 [2024-12-09 10:15:05.123796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:03:58.326 [2024-12-09 10:15:05.123808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:03:58.326 [2024-12-09 10:15:05.123819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:03:58.326 [2024-12-09 10:15:05.123838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:03:58.326 [2024-12-09 10:15:05.123848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:03:58.326 [2024-12-09 10:15:05.123859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:03:58.326 [2024-12-09 10:15:05.123871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:03:58.326 [2024-12-09 10:15:05.123882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:03:58.326 [2024-12-09 10:15:05.123893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:03:58.326 [2024-12-09 10:15:05.123903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:03:58.326 [2024-12-09 10:15:05.123914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:03:58.326 [2024-12-09 10:15:05.123925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:03:58.326 [2024-12-09 10:15:05.123945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:03:58.326 [2024-12-09 10:15:05.123956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:03:58.326 [2024-12-09 10:15:05.123967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:03:58.326 [2024-12-09 10:15:05.123978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:03:58.326 [2024-12-09 10:15:05.123993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:03:58.326 [2024-12-09 10:15:05.124015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:03:58.326 [2024-12-09 10:15:05.124025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:03:58.326 [2024-12-09 10:15:05.124050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:03:58.326 [2024-12-09 10:15:05.124076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:03:58.326 [2024-12-09 10:15:05.124087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:03:58.326 [2024-12-09 10:15:05.124098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:03:58.326 [2024-12-09 10:15:05.124124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:03:58.326 [2024-12-09 10:15:05.124134] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:03:58.326 [2024-12-09 10:15:05.124145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:03:58.326 [2024-12-09 10:15:05.124155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:03:58.326 [2024-12-09 10:15:05.124181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:03:58.326 [2024-12-09 10:15:05.124194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:03:58.326 [2024-12-09 10:15:05.124206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:03:58.326 [2024-12-09 10:15:05.124217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:03:58.326 [2024-12-09 10:15:05.124227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:03:58.326 [2024-12-09 10:15:05.124238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:03:58.326 [2024-12-09 10:15:05.124249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:03:58.326 [2024-12-09 10:15:05.124259] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:03:58.326 [2024-12-09 10:15:05.124271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:03:58.326 [2024-12-09 10:15:05.124287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:03:58.326 [2024-12-09 10:15:05.124299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:03:58.326 [2024-12-09 10:15:05.124325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:03:58.326 [2024-12-09 10:15:05.124338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:03:58.326 [2024-12-09 10:15:05.124349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:03:58.326 [2024-12-09 10:15:05.124360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:03:58.326 [2024-12-09 10:15:05.124371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:03:58.326 [2024-12-09 10:15:05.124383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:03:58.326 [2024-12-09 10:15:05.124396] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:03:58.326 [2024-12-09 10:15:05.124411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:03:58.326 [2024-12-09 10:15:05.124454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:03:58.326 [2024-12-09 10:15:05.124465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:03:58.326 [2024-12-09 10:15:05.124476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:03:58.326 [2024-12-09 10:15:05.124487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:03:58.326 [2024-12-09 10:15:05.124498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:03:58.326 [2024-12-09 10:15:05.124509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc 
ver:2 blk_offs:0x6120 blk_sz:0x800 01:03:58.326 [2024-12-09 10:15:05.124535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:03:58.326 [2024-12-09 10:15:05.124546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:03:58.326 [2024-12-09 10:15:05.124573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:03:58.326 [2024-12-09 10:15:05.124585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:03:58.326 [2024-12-09 10:15:05.124596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:03:58.326 [2024-12-09 10:15:05.124608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:03:58.326 [2024-12-09 10:15:05.124621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:03:58.326 [2024-12-09 10:15:05.124633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:03:58.326 [2024-12-09 10:15:05.124645] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:03:58.326 [2024-12-09 10:15:05.124659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:03:58.326 [2024-12-09 10:15:05.124672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:03:58.326 [2024-12-09 10:15:05.124684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:03:58.326 [2024-12-09 10:15:05.124696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:03:58.326 [2024-12-09 10:15:05.124708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:03:58.326 [2024-12-09 10:15:05.124721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.326 [2024-12-09 10:15:05.124732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:03:58.326 [2024-12-09 10:15:05.124744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 01:03:58.326 [2024-12-09 10:15:05.124756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.326 [2024-12-09 10:15:05.173812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.326 [2024-12-09 10:15:05.173874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:03:58.326 [2024-12-09 10:15:05.173895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.981 ms 01:03:58.326 [2024-12-09 10:15:05.173909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.326 [2024-12-09 10:15:05.174058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.326 [2024-12-09 10:15:05.174076] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:03:58.326 [2024-12-09 10:15:05.174090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 01:03:58.326 [2024-12-09 10:15:05.174103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.326 [2024-12-09 10:15:05.240945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.326 [2024-12-09 10:15:05.241025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:03:58.326 [2024-12-09 10:15:05.241051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.721 ms 01:03:58.326 [2024-12-09 10:15:05.241064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.326 [2024-12-09 10:15:05.241146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.326 [2024-12-09 10:15:05.241164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:03:58.326 [2024-12-09 10:15:05.241179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:03:58.326 [2024-12-09 10:15:05.241191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.326 [2024-12-09 10:15:05.241987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.326 [2024-12-09 10:15:05.242019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:03:58.326 [2024-12-09 10:15:05.242034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 01:03:58.326 [2024-12-09 10:15:05.242051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.326 [2024-12-09 10:15:05.242226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.326 [2024-12-09 10:15:05.242245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:03:58.326 [2024-12-09 10:15:05.242274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 01:03:58.327 [2024-12-09 10:15:05.242286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.327 [2024-12-09 10:15:05.265527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.327 [2024-12-09 10:15:05.265567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:03:58.327 [2024-12-09 10:15:05.265583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.210 ms 01:03:58.327 [2024-12-09 10:15:05.265596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.327 [2024-12-09 10:15:05.286317] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:03:58.327 [2024-12-09 10:15:05.286400] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:03:58.327 [2024-12-09 10:15:05.286422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.327 [2024-12-09 10:15:05.286435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:03:58.327 [2024-12-09 10:15:05.286449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.643 ms 01:03:58.327 [2024-12-09 10:15:05.286462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.327 [2024-12-09 10:15:05.320834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.327 [2024-12-09 10:15:05.320913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:03:58.327 [2024-12-09 
10:15:05.320934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.281 ms 01:03:58.327 [2024-12-09 10:15:05.320947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.327 [2024-12-09 10:15:05.338674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.327 [2024-12-09 10:15:05.338738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:03:58.327 [2024-12-09 10:15:05.338758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.599 ms 01:03:58.327 [2024-12-09 10:15:05.338770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.327 [2024-12-09 10:15:05.354648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.327 [2024-12-09 10:15:05.354705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:03:58.327 [2024-12-09 10:15:05.354724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.804 ms 01:03:58.327 [2024-12-09 10:15:05.354752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.327 [2024-12-09 10:15:05.355736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.327 [2024-12-09 10:15:05.355774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:03:58.327 [2024-12-09 10:15:05.355791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 01:03:58.327 [2024-12-09 10:15:05.355825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.585 [2024-12-09 10:15:05.439868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.585 [2024-12-09 10:15:05.439948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:03:58.585 [2024-12-09 10:15:05.439971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.006 ms 01:03:58.585 [2024-12-09 10:15:05.439984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.585 [2024-12-09 10:15:05.454012] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:03:58.585 [2024-12-09 10:15:05.458229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.585 [2024-12-09 10:15:05.458291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:03:58.585 [2024-12-09 10:15:05.458325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.156 ms 01:03:58.585 [2024-12-09 10:15:05.458346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.585 [2024-12-09 10:15:05.458516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.585 [2024-12-09 10:15:05.458556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:03:58.585 [2024-12-09 10:15:05.458571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:03:58.585 [2024-12-09 10:15:05.458582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.585 [2024-12-09 10:15:05.458698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.585 [2024-12-09 10:15:05.458719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:03:58.585 [2024-12-09 10:15:05.458733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 01:03:58.585 [2024-12-09 10:15:05.458745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.586 [2024-12-09 10:15:05.458786] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.586 [2024-12-09 10:15:05.458801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:03:58.586 [2024-12-09 10:15:05.458814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:03:58.586 [2024-12-09 10:15:05.458826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.586 [2024-12-09 10:15:05.458882] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:03:58.586 [2024-12-09 10:15:05.458900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.586 [2024-12-09 10:15:05.458913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:03:58.586 [2024-12-09 10:15:05.458930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 01:03:58.586 [2024-12-09 10:15:05.458948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.586 [2024-12-09 10:15:05.493497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.586 [2024-12-09 10:15:05.493747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:03:58.586 [2024-12-09 10:15:05.493866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.516 ms 01:03:58.586 [2024-12-09 10:15:05.493916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.586 [2024-12-09 10:15:05.494134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:03:58.586 [2024-12-09 10:15:05.494194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:03:58.586 [2024-12-09 10:15:05.494391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 01:03:58.586 [2024-12-09 10:15:05.494446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:03:58.586 [2024-12-09 10:15:05.495924] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 414.164 ms, result 0 01:03:59.524  [2024-12-09T10:15:07.945Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-09T10:15:08.512Z] Copying: 51/1024 [MB] (25 MBps) [2024-12-09T10:15:09.891Z] Copying: 76/1024 [MB] (25 MBps) [2024-12-09T10:15:10.827Z] Copying: 101/1024 [MB] (25 MBps) [2024-12-09T10:15:11.764Z] Copying: 127/1024 [MB] (25 MBps) [2024-12-09T10:15:12.700Z] Copying: 153/1024 [MB] (26 MBps) [2024-12-09T10:15:13.635Z] Copying: 179/1024 [MB] (26 MBps) [2024-12-09T10:15:14.569Z] Copying: 205/1024 [MB] (25 MBps) [2024-12-09T10:15:15.946Z] Copying: 230/1024 [MB] (25 MBps) [2024-12-09T10:15:16.537Z] Copying: 255/1024 [MB] (25 MBps) [2024-12-09T10:15:17.914Z] Copying: 280/1024 [MB] (24 MBps) [2024-12-09T10:15:18.852Z] Copying: 305/1024 [MB] (24 MBps) [2024-12-09T10:15:19.801Z] Copying: 331/1024 [MB] (25 MBps) [2024-12-09T10:15:20.738Z] Copying: 356/1024 [MB] (25 MBps) [2024-12-09T10:15:21.674Z] Copying: 381/1024 [MB] (25 MBps) [2024-12-09T10:15:22.610Z] Copying: 405/1024 [MB] (23 MBps) [2024-12-09T10:15:23.546Z] Copying: 430/1024 [MB] (24 MBps) [2024-12-09T10:15:24.935Z] Copying: 454/1024 [MB] (24 MBps) [2024-12-09T10:15:25.871Z] Copying: 480/1024 [MB] (25 MBps) [2024-12-09T10:15:26.806Z] Copying: 505/1024 [MB] (25 MBps) [2024-12-09T10:15:27.742Z] Copying: 531/1024 [MB] (25 MBps) [2024-12-09T10:15:28.678Z] Copying: 555/1024 [MB] (24 MBps) [2024-12-09T10:15:29.613Z] Copying: 578/1024 [MB] (22 MBps) [2024-12-09T10:15:30.547Z] Copying: 600/1024 [MB] (22 MBps) [2024-12-09T10:15:31.923Z] 
Copying: 623/1024 [MB] (22 MBps) [2024-12-09T10:15:32.860Z] Copying: 645/1024 [MB] (22 MBps) [2024-12-09T10:15:33.797Z] Copying: 668/1024 [MB] (22 MBps) [2024-12-09T10:15:34.733Z] Copying: 691/1024 [MB] (22 MBps) [2024-12-09T10:15:35.679Z] Copying: 713/1024 [MB] (22 MBps) [2024-12-09T10:15:36.618Z] Copying: 736/1024 [MB] (22 MBps) [2024-12-09T10:15:37.554Z] Copying: 758/1024 [MB] (22 MBps) [2024-12-09T10:15:38.931Z] Copying: 781/1024 [MB] (22 MBps) [2024-12-09T10:15:39.866Z] Copying: 803/1024 [MB] (22 MBps) [2024-12-09T10:15:40.802Z] Copying: 827/1024 [MB] (23 MBps) [2024-12-09T10:15:41.737Z] Copying: 851/1024 [MB] (23 MBps) [2024-12-09T10:15:42.672Z] Copying: 874/1024 [MB] (23 MBps) [2024-12-09T10:15:43.608Z] Copying: 897/1024 [MB] (23 MBps) [2024-12-09T10:15:44.544Z] Copying: 920/1024 [MB] (22 MBps) [2024-12-09T10:15:45.920Z] Copying: 943/1024 [MB] (22 MBps) [2024-12-09T10:15:46.857Z] Copying: 967/1024 [MB] (23 MBps) [2024-12-09T10:15:47.793Z] Copying: 990/1024 [MB] (23 MBps) [2024-12-09T10:15:48.740Z] Copying: 1012/1024 [MB] (22 MBps) [2024-12-09T10:15:49.714Z] Copying: 1023/1024 [MB] (10 MBps) [2024-12-09T10:15:49.714Z] Copying: 1024/1024 [MB] (average 23 MBps)
[2024-12-09 10:15:49.377600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:04:42.671 [2024-12-09 10:15:49.377673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
01:04:42.671 [2024-12-09 10:15:49.377712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
01:04:42.671 [2024-12-09 10:15:49.377726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:04:42.671 [2024-12-09 10:15:49.379002] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
01:04:42.671 [2024-12-09 10:15:49.385533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:04:42.671 [2024-12-09 10:15:49.385578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
01:04:42.671 [2024-12-09 10:15:49.385596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.472 ms
01:04:42.671 [2024-12-09 10:15:49.385618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:04:42.671 [2024-12-09 10:15:49.401185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:04:42.671 [2024-12-09 10:15:49.401242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
01:04:42.671 [2024-12-09 10:15:49.401342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.334 ms
01:04:42.671 [2024-12-09 10:15:49.401370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:04:42.671 [2024-12-09 10:15:49.425843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:04:42.671 [2024-12-09 10:15:49.425889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
01:04:42.671 [2024-12-09 10:15:49.425938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.435 ms
01:04:42.671 [2024-12-09 10:15:49.425950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:04:42.671 [2024-12-09 10:15:49.433328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:04:42.671 [2024-12-09 10:15:49.433426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
01:04:42.671 [2024-12-09 10:15:49.433460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.289 ms
01:04:42.671 [2024-12-09 10:15:49.433471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0]
status: 0 01:04:42.671 [2024-12-09 10:15:49.467860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:42.671 [2024-12-09 10:15:49.468107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:04:42.671 [2024-12-09 10:15:49.468150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.337 ms 01:04:42.671 [2024-12-09 10:15:49.468163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:42.671 [2024-12-09 10:15:49.487825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:42.671 [2024-12-09 10:15:49.487883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:04:42.671 [2024-12-09 10:15:49.487916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.617 ms 01:04:42.671 [2024-12-09 10:15:49.487927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:42.671 [2024-12-09 10:15:49.609670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:42.671 [2024-12-09 10:15:49.609735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:04:42.671 [2024-12-09 10:15:49.609807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 121.682 ms 01:04:42.671 [2024-12-09 10:15:49.609823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:42.671 [2024-12-09 10:15:49.644422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:42.671 [2024-12-09 10:15:49.644477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:04:42.671 [2024-12-09 10:15:49.644509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.561 ms 01:04:42.671 [2024-12-09 10:15:49.644550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:42.671 [2024-12-09 10:15:49.677213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:42.671 [2024-12-09 10:15:49.677267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:04:42.671 [2024-12-09 10:15:49.677284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.605 ms 01:04:42.671 [2024-12-09 10:15:49.677295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:42.671 [2024-12-09 10:15:49.709824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:42.671 [2024-12-09 10:15:49.709906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:04:42.671 [2024-12-09 10:15:49.709937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.489 ms 01:04:42.671 [2024-12-09 10:15:49.709948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:42.930 [2024-12-09 10:15:49.742380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:42.930 [2024-12-09 10:15:49.742432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:04:42.930 [2024-12-09 10:15:49.742446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.337 ms 01:04:42.930 [2024-12-09 10:15:49.742457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:42.930 [2024-12-09 10:15:49.742495] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:04:42.930 [2024-12-09 10:15:49.742547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130048 / 261120 wr_cnt: 1 state: open 01:04:42.930 [2024-12-09 10:15:49.742561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 2: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.742993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743167] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:04:42.930 [2024-12-09 10:15:49.743178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 
10:15:49.743519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:04:42.931 [2024-12-09 10:15:49.743867] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:04:42.931 [2024-12-09 10:15:49.743877] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c7a4782-9886-43d6-8da8-01df0a702c96
01:04:42.931 [2024-12-09 10:15:49.743903] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130048
01:04:42.931 [2024-12-09 10:15:49.743914] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131008
01:04:42.931 [2024-12-09 10:15:49.743924] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130048
01:04:42.931 [2024-12-09 10:15:49.743935] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074
01:04:42.931 [2024-12-09 10:15:49.743945] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
01:04:42.931 [2024-12-09 10:15:49.743956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
01:04:42.931 [2024-12-09 10:15:49.743966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
01:04:42.931 [2024-12-09 10:15:49.743975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
01:04:42.931 [2024-12-09 10:15:49.743984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
01:04:42.931 [2024-12-09 10:15:49.743995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:04:42.931 [2024-12-09 10:15:49.744021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
01:04:42.931 [2024-12-09 10:15:49.744032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms
01:04:42.931 [2024-12-09 10:15:49.744043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:04:42.931 [2024-12-09 10:15:49.762807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:04:42.931 [2024-12-09 10:15:49.762857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
01:04:42.931 [2024-12-09 10:15:49.762872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.679 ms
01:04:42.931 [2024-12-09 10:15:49.762883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:04:42.931 [2024-12-09 10:15:49.763439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:04:42.931 [2024-12-09 10:15:49.763479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
01:04:42.931 [2024-12-09 10:15:49.763515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms
01:04:42.931 [2024-12-09 10:15:49.763541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:04:42.931 [2024-12-09 10:15:49.812040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:04:42.931 [2024-12-09 10:15:49.812096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
01:04:42.931 [2024-12-09 10:15:49.812126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:04:42.931 [2024-12-09 10:15:49.812137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:04:42.931 [2024-12-09 10:15:49.812197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:04:42.931 [2024-12-09 10:15:49.812241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
01:04:42.931 [2024-12-09 10:15:49.812259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:04:42.931 [2024-12-09 10:15:49.812270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:04:42.931 [2024-12-09 10:15:49.812374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:04:42.931 [2024-12-09 10:15:49.812406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:04:42.931 [2024-12-09 10:15:49.812433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:42.931 [2024-12-09 10:15:49.812456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:04:42.931 [2024-12-09 10:15:49.812469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:04:42.931 [2024-12-09 10:15:49.812480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:04:42.931 [2024-12-09 10:15:49.812491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:42.931 [2024-12-09 10:15:49.929017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:04:42.931 [2024-12-09 10:15:49.929090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:04:42.931 [2024-12-09 10:15:49.929108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:04:42.931 [2024-12-09 10:15:49.929121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:43.190 [2024-12-09 10:15:50.022500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:04:43.190 [2024-12-09 10:15:50.022570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:04:43.190 [2024-12-09 10:15:50.022588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:04:43.190 [2024-12-09 10:15:50.022622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:43.190 [2024-12-09 10:15:50.022774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:04:43.190 [2024-12-09 10:15:50.022791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:04:43.190 [2024-12-09 10:15:50.022804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:04:43.190 [2024-12-09 10:15:50.022815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:43.190 [2024-12-09 10:15:50.022859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:04:43.190 [2024-12-09 10:15:50.022873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:04:43.190 [2024-12-09 10:15:50.022901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:04:43.190 [2024-12-09 10:15:50.022912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:43.190 [2024-12-09 10:15:50.023077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:04:43.190 [2024-12-09 10:15:50.023094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:04:43.190 [2024-12-09 10:15:50.023106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:04:43.190 [2024-12-09 10:15:50.023118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:43.190 [2024-12-09 10:15:50.023177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:04:43.190 [2024-12-09 10:15:50.023194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:04:43.190 [2024-12-09 10:15:50.023222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:04:43.190 [2024-12-09 10:15:50.023232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:43.190 [2024-12-09 10:15:50.023328] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 01:04:43.190 [2024-12-09 10:15:50.023343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:04:43.190 [2024-12-09 10:15:50.023355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:04:43.190 [2024-12-09 10:15:50.023366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:43.190 [2024-12-09 10:15:50.023417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:04:43.190 [2024-12-09 10:15:50.023433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:04:43.190 [2024-12-09 10:15:50.023461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:04:43.190 [2024-12-09 10:15:50.023549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:43.190 [2024-12-09 10:15:50.023743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 649.404 ms, result 0 01:04:44.566 01:04:44.566 01:04:44.824 10:15:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 01:04:47.357 10:15:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:04:47.357 [2024-12-09 10:15:54.139723] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:04:47.357 [2024-12-09 10:15:54.139954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83163 ] 01:04:47.357 [2024-12-09 10:15:54.320816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:47.616 [2024-12-09 10:15:54.460999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:04:47.874 [2024-12-09 10:15:54.873047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:04:47.874 [2024-12-09 10:15:54.873188] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:04:48.134 [2024-12-09 10:15:55.041723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.134 [2024-12-09 10:15:55.041792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:04:48.134 [2024-12-09 10:15:55.041835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:04:48.134 [2024-12-09 10:15:55.041854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.134 [2024-12-09 10:15:55.041977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.134 [2024-12-09 10:15:55.042045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:04:48.134 [2024-12-09 10:15:55.042067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 01:04:48.134 [2024-12-09 10:15:55.042084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.134 [2024-12-09 10:15:55.042163] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:04:48.134 [2024-12-09 10:15:55.043418] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:04:48.134 [2024-12-09 10:15:55.043512] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.134 [2024-12-09 10:15:55.043535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:04:48.134 [2024-12-09 10:15:55.043570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 01:04:48.134 [2024-12-09 10:15:55.043588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.134 [2024-12-09 10:15:55.045976] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:04:48.134 [2024-12-09 10:15:55.064012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.134 [2024-12-09 10:15:55.064072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:04:48.134 [2024-12-09 10:15:55.064097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.038 ms 01:04:48.134 [2024-12-09 10:15:55.064116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.134 [2024-12-09 10:15:55.064222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.134 [2024-12-09 10:15:55.064261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:04:48.134 [2024-12-09 10:15:55.064285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 01:04:48.134 [2024-12-09 10:15:55.064318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.134 [2024-12-09 10:15:55.075168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.134 [2024-12-09 10:15:55.075243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:04:48.134 [2024-12-09 10:15:55.075326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.679 ms 01:04:48.134 [2024-12-09 10:15:55.075354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.134 [2024-12-09 10:15:55.075472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.134 [2024-12-09 10:15:55.075514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:04:48.134 [2024-12-09 10:15:55.075565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 01:04:48.134 [2024-12-09 10:15:55.075597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.134 [2024-12-09 10:15:55.075702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.134 [2024-12-09 10:15:55.075729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:04:48.134 [2024-12-09 10:15:55.075750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 01:04:48.134 [2024-12-09 10:15:55.075769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.134 [2024-12-09 10:15:55.075858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:04:48.134 [2024-12-09 10:15:55.081524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.134 [2024-12-09 10:15:55.081580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:04:48.134 [2024-12-09 10:15:55.081624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.678 ms 01:04:48.134 [2024-12-09 10:15:55.081642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.134 [2024-12-09 10:15:55.081704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.134 [2024-12-09 10:15:55.081728] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:04:48.134 [2024-12-09 10:15:55.081748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 01:04:48.134 [2024-12-09 10:15:55.081780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.134 [2024-12-09 10:15:55.081907] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:04:48.134 [2024-12-09 10:15:55.081954] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:04:48.134 [2024-12-09 10:15:55.082044] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:04:48.134 [2024-12-09 10:15:55.082091] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:04:48.134 [2024-12-09 10:15:55.082236] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:04:48.134 [2024-12-09 10:15:55.082295] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:04:48.134 [2024-12-09 10:15:55.082337] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:04:48.134 [2024-12-09 10:15:55.082362] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:04:48.134 [2024-12-09 10:15:55.082383] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:04:48.134 [2024-12-09 10:15:55.082404] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:04:48.134 [2024-12-09 10:15:55.082421] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:04:48.134 [2024-12-09 10:15:55.082461] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:04:48.134 [2024-12-09 10:15:55.082493] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:04:48.134 [2024-12-09 10:15:55.082511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.134 [2024-12-09 10:15:55.082529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:04:48.134 [2024-12-09 10:15:55.082547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 01:04:48.134 [2024-12-09 10:15:55.082566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.134 [2024-12-09 10:15:55.082685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.134 [2024-12-09 10:15:55.082726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:04:48.134 [2024-12-09 10:15:55.082747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 01:04:48.134 [2024-12-09 10:15:55.082764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.134 [2024-12-09 10:15:55.082933] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:04:48.134 [2024-12-09 10:15:55.082991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:04:48.134 [2024-12-09 10:15:55.083026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:04:48.134 [2024-12-09 10:15:55.083076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:04:48.135 [2024-12-09 10:15:55.083095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 
01:04:48.135 [2024-12-09 10:15:55.083113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:04:48.135 [2024-12-09 10:15:55.083141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:04:48.135 [2024-12-09 10:15:55.083159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:04:48.135 [2024-12-09 10:15:55.083176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:04:48.135 [2024-12-09 10:15:55.083193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:04:48.135 [2024-12-09 10:15:55.083210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:04:48.135 [2024-12-09 10:15:55.083228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:04:48.135 [2024-12-09 10:15:55.083245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:04:48.135 [2024-12-09 10:15:55.083295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:04:48.135 [2024-12-09 10:15:55.083336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:04:48.135 [2024-12-09 10:15:55.083358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:04:48.135 [2024-12-09 10:15:55.083390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:04:48.135 [2024-12-09 10:15:55.083424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:04:48.135 [2024-12-09 10:15:55.083443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:04:48.135 [2024-12-09 10:15:55.083460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:04:48.135 [2024-12-09 10:15:55.083478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:04:48.135 [2024-12-09 10:15:55.083497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:04:48.135 [2024-12-09 10:15:55.083514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:04:48.135 [2024-12-09 10:15:55.083532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:04:48.135 [2024-12-09 10:15:55.083549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:04:48.135 [2024-12-09 10:15:55.083567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:04:48.135 [2024-12-09 10:15:55.083584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:04:48.135 [2024-12-09 10:15:55.083601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:04:48.135 [2024-12-09 10:15:55.083621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:04:48.135 [2024-12-09 10:15:55.083640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:04:48.135 [2024-12-09 10:15:55.083656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:04:48.135 [2024-12-09 10:15:55.083673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:04:48.135 [2024-12-09 10:15:55.083692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:04:48.135 [2024-12-09 10:15:55.083711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:04:48.135 [2024-12-09 10:15:55.083730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:04:48.135 [2024-12-09 10:15:55.083748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:04:48.135 [2024-12-09 10:15:55.083766] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:04:48.135 [2024-12-09 10:15:55.083813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:04:48.135 [2024-12-09 10:15:55.083832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:04:48.135 [2024-12-09 10:15:55.083853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:04:48.135 [2024-12-09 10:15:55.083872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:04:48.135 [2024-12-09 10:15:55.083904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:04:48.135 [2024-12-09 10:15:55.083922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:04:48.135 [2024-12-09 10:15:55.083940] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:04:48.135 [2024-12-09 10:15:55.083958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:04:48.135 [2024-12-09 10:15:55.083976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:04:48.135 [2024-12-09 10:15:55.083998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:04:48.135 [2024-12-09 10:15:55.084018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:04:48.135 [2024-12-09 10:15:55.084036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:04:48.135 [2024-12-09 10:15:55.084055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:04:48.135 [2024-12-09 10:15:55.084074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:04:48.135 [2024-12-09 10:15:55.084092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:04:48.135 [2024-12-09 10:15:55.084124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:04:48.135 [2024-12-09 10:15:55.084166] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:04:48.135 [2024-12-09 10:15:55.084227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:04:48.135 [2024-12-09 10:15:55.084272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:04:48.135 [2024-12-09 10:15:55.084313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:04:48.135 [2024-12-09 10:15:55.084333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:04:48.135 [2024-12-09 10:15:55.084353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:04:48.135 [2024-12-09 10:15:55.084374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:04:48.135 [2024-12-09 10:15:55.084392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:04:48.135 [2024-12-09 10:15:55.084412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:04:48.135 [2024-12-09 10:15:55.084431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 
blk_sz:0x40 01:04:48.135 [2024-12-09 10:15:55.084450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:04:48.135 [2024-12-09 10:15:55.084469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:04:48.135 [2024-12-09 10:15:55.084501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:04:48.135 [2024-12-09 10:15:55.084522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:04:48.135 [2024-12-09 10:15:55.084541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:04:48.135 [2024-12-09 10:15:55.084560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:04:48.135 [2024-12-09 10:15:55.084579] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:04:48.135 [2024-12-09 10:15:55.084602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:04:48.135 [2024-12-09 10:15:55.084622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:04:48.135 [2024-12-09 10:15:55.084656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:04:48.135 [2024-12-09 10:15:55.084674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:04:48.135 [2024-12-09 10:15:55.084692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:04:48.135 [2024-12-09 10:15:55.084727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.135 [2024-12-09 10:15:55.084745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:04:48.135 [2024-12-09 10:15:55.084764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.860 ms 01:04:48.135 [2024-12-09 10:15:55.084799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.135 [2024-12-09 10:15:55.139359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.135 [2024-12-09 10:15:55.139448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:04:48.135 [2024-12-09 10:15:55.139481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.462 ms 01:04:48.135 [2024-12-09 10:15:55.139511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.135 [2024-12-09 10:15:55.139690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.135 [2024-12-09 10:15:55.139749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:04:48.135 [2024-12-09 10:15:55.139772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 01:04:48.135 [2024-12-09 10:15:55.139792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.394 [2024-12-09 10:15:55.201432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 01:04:48.394 [2024-12-09 10:15:55.201521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:04:48.394 [2024-12-09 10:15:55.201579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.454 ms 01:04:48.394 [2024-12-09 10:15:55.201597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.394 [2024-12-09 10:15:55.201686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.394 [2024-12-09 10:15:55.201719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:04:48.394 [2024-12-09 10:15:55.201740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:04:48.394 [2024-12-09 10:15:55.201759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.394 [2024-12-09 10:15:55.202662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.394 [2024-12-09 10:15:55.202728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:04:48.394 [2024-12-09 10:15:55.202766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 01:04:48.394 [2024-12-09 10:15:55.202785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.394 [2024-12-09 10:15:55.203103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.394 [2024-12-09 10:15:55.203142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:04:48.394 [2024-12-09 10:15:55.203178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 01:04:48.394 [2024-12-09 10:15:55.203197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.395 [2024-12-09 10:15:55.225445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.395 [2024-12-09 10:15:55.225507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:04:48.395 [2024-12-09 10:15:55.225532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.188 ms 01:04:48.395 [2024-12-09 10:15:55.225565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.395 [2024-12-09 10:15:55.244063] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 01:04:48.395 [2024-12-09 10:15:55.244125] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:04:48.395 [2024-12-09 10:15:55.244152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.395 [2024-12-09 10:15:55.244172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:04:48.395 [2024-12-09 10:15:55.244193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.324 ms 01:04:48.395 [2024-12-09 10:15:55.244210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.395 [2024-12-09 10:15:55.276075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.395 [2024-12-09 10:15:55.276135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:04:48.395 [2024-12-09 10:15:55.276176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.699 ms 01:04:48.395 [2024-12-09 10:15:55.276196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.395 [2024-12-09 10:15:55.293175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.395 [2024-12-09 10:15:55.293245] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:04:48.395 [2024-12-09 10:15:55.293279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.852 ms 01:04:48.395 [2024-12-09 10:15:55.293301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.395 [2024-12-09 10:15:55.310146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.395 [2024-12-09 10:15:55.310208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:04:48.395 [2024-12-09 10:15:55.310233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.789 ms 01:04:48.395 [2024-12-09 10:15:55.310282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.395 [2024-12-09 10:15:55.311385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.395 [2024-12-09 10:15:55.311490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:04:48.395 [2024-12-09 10:15:55.311538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 01:04:48.395 [2024-12-09 10:15:55.311559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.395 [2024-12-09 10:15:55.396475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.395 [2024-12-09 10:15:55.396594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:04:48.395 [2024-12-09 10:15:55.396634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.838 ms 01:04:48.395 [2024-12-09 10:15:55.396667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.395 [2024-12-09 10:15:55.410513] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:04:48.395 [2024-12-09 10:15:55.413657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.395 [2024-12-09 10:15:55.413724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:04:48.395 [2024-12-09 10:15:55.413749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.882 ms 01:04:48.395 [2024-12-09 10:15:55.413768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.395 [2024-12-09 10:15:55.414063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.395 [2024-12-09 10:15:55.414096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:04:48.395 [2024-12-09 10:15:55.414127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:04:48.395 [2024-12-09 10:15:55.414147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.395 [2024-12-09 10:15:55.416598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.395 [2024-12-09 10:15:55.416653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:04:48.395 [2024-12-09 10:15:55.416677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.358 ms 01:04:48.395 [2024-12-09 10:15:55.416710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.395 [2024-12-09 10:15:55.416764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.395 [2024-12-09 10:15:55.416788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:04:48.395 [2024-12-09 10:15:55.416806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:04:48.395 [2024-12-09 10:15:55.416823] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.395 [2024-12-09 10:15:55.416964] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:04:48.395 [2024-12-09 10:15:55.416993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.395 [2024-12-09 10:15:55.417012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:04:48.395 [2024-12-09 10:15:55.417030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 01:04:48.395 [2024-12-09 10:15:55.417049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.654 [2024-12-09 10:15:55.451350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.654 [2024-12-09 10:15:55.451432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:04:48.654 [2024-12-09 10:15:55.451483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.257 ms 01:04:48.654 [2024-12-09 10:15:55.451504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.654 [2024-12-09 10:15:55.451648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:04:48.654 [2024-12-09 10:15:55.451705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:04:48.654 [2024-12-09 10:15:55.451743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 01:04:48.654 [2024-12-09 10:15:55.451763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:04:48.654 [2024-12-09 10:15:55.456223] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 412.729 ms, result 0 01:04:50.031  [2024-12-09T10:15:58.011Z] Copying: 808/1048576 [kB] (808 kBps) [2024-12-09T10:15:58.949Z] Copying: 3640/1048576 [kB] (2832 kBps) [... intermediate progress-meter redraws, 16/1024 MB through 1018/1024 MB at 12-29 MBps, omitted ...] [2024-12-09T10:16:37.567Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-09 10:16:37.552542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.523 [2024-12-09 10:16:37.552666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:05:30.523 [2024-12-09 10:16:37.552710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:05:30.523 [2024-12-09 10:16:37.552726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.523 [2024-12-09 10:16:37.552773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:05:30.523 [2024-12-09 10:16:37.558463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.523 [2024-12-09 10:16:37.558509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:05:30.523 [2024-12-09 10:16:37.558555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.660 ms 01:05:30.523 [2024-12-09 10:16:37.558567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.523 [2024-12-09 10:16:37.558916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.523 [2024-12-09 10:16:37.558949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:05:30.523 [2024-12-09 10:16:37.558963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 01:05:30.523 [2024-12-09 10:16:37.558975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.783 [2024-12-09 10:16:37.571794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.783 [2024-12-09 10:16:37.571848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:05:30.783 [2024-12-09 10:16:37.571867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.787 ms 01:05:30.783 [2024-12-09 10:16:37.571880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.783 [2024-12-09 10:16:37.579743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.783 [2024-12-09 10:16:37.579793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:05:30.783 [2024-12-09 10:16:37.579832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.814 ms 01:05:30.783 [2024-12-09 10:16:37.579847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.783 [2024-12-09 10:16:37.617222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.783 [2024-12-09 10:16:37.617336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:05:30.783 [2024-12-09 10:16:37.617354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.325 ms 01:05:30.783 [2024-12-09 10:16:37.617366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0]
status: 0 01:05:30.783 [2024-12-09 10:16:37.637217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.783 [2024-12-09 10:16:37.637289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:05:30.783 [2024-12-09 10:16:37.637338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.798 ms 01:05:30.783 [2024-12-09 10:16:37.637366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.783 [2024-12-09 10:16:37.639129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.783 [2024-12-09 10:16:37.639173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:05:30.783 [2024-12-09 10:16:37.639189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.690 ms 01:05:30.783 [2024-12-09 10:16:37.639209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.783 [2024-12-09 10:16:37.674989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.783 [2024-12-09 10:16:37.675051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:05:30.783 [2024-12-09 10:16:37.675068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.758 ms 01:05:30.783 [2024-12-09 10:16:37.675079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.783 [2024-12-09 10:16:37.709055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.783 [2024-12-09 10:16:37.709112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:05:30.783 [2024-12-09 10:16:37.709128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.933 ms 01:05:30.783 [2024-12-09 10:16:37.709155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.783 [2024-12-09 10:16:37.744105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.783 [2024-12-09 10:16:37.744172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:05:30.783 [2024-12-09 10:16:37.744188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.908 ms 01:05:30.783 [2024-12-09 10:16:37.744200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.783 [2024-12-09 10:16:37.779936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.783 [2024-12-09 10:16:37.780001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:05:30.783 [2024-12-09 10:16:37.780034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.624 ms 01:05:30.783 [2024-12-09 10:16:37.780045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.783 [2024-12-09 10:16:37.780087] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:05:30.783 [2024-12-09 10:16:37.780110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 01:05:30.783 [2024-12-09 10:16:37.780124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 01:05:30.783 [2024-12-09 10:16:37.780137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:05:30.783 [2024-12-09 10:16:37.780149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:05:30.783 [2024-12-09 10:16:37.780161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 
0 state: free 01:05:30.784 [2024-12-09 10:16:37.780173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 
/ 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.780989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781147] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:05:30.784 [2024-12-09 10:16:37.781363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:05:30.785 [2024-12-09 10:16:37.781382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:05:30.785 [2024-12-09 10:16:37.781394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:05:30.785 [2024-12-09 10:16:37.781411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:05:30.785 [2024-12-09 10:16:37.781439] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:05:30.785 [2024-12-09 10:16:37.781450] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c7a4782-9886-43d6-8da8-01df0a702c96 01:05:30.785 [2024-12-09 10:16:37.781463] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 01:05:30.785 [2024-12-09 10:16:37.781473] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 134592 01:05:30.785 [2024-12-09 10:16:37.781491] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user 
writes: 132608 01:05:30.785 [2024-12-09 10:16:37.781503] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0150 01:05:30.785 [2024-12-09 10:16:37.781513] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:05:30.785 [2024-12-09 10:16:37.781536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:05:30.785 [2024-12-09 10:16:37.781548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:05:30.785 [2024-12-09 10:16:37.781558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:05:30.785 [2024-12-09 10:16:37.781568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:05:30.785 [2024-12-09 10:16:37.781578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.785 [2024-12-09 10:16:37.781590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:05:30.785 [2024-12-09 10:16:37.781602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 01:05:30.785 [2024-12-09 10:16:37.781613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.785 [2024-12-09 10:16:37.801255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.785 [2024-12-09 10:16:37.801348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:05:30.785 [2024-12-09 10:16:37.801383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.598 ms 01:05:30.785 [2024-12-09 10:16:37.801394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:30.785 [2024-12-09 10:16:37.801859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:30.785 [2024-12-09 10:16:37.801884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:05:30.785 [2024-12-09 10:16:37.801898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 01:05:30.785 [2024-12-09 10:16:37.801910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.043 [2024-12-09 10:16:37.854954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:05:31.043 [2024-12-09 10:16:37.855033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:05:31.043 [2024-12-09 10:16:37.855049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:05:31.043 [2024-12-09 10:16:37.855072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.043 [2024-12-09 10:16:37.855143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:05:31.043 [2024-12-09 10:16:37.855158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:05:31.043 [2024-12-09 10:16:37.855169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:05:31.043 [2024-12-09 10:16:37.855181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.043 [2024-12-09 10:16:37.855289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:05:31.043 [2024-12-09 10:16:37.855309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:05:31.043 [2024-12-09 10:16:37.855322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:05:31.043 [2024-12-09 10:16:37.855333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.043 [2024-12-09 10:16:37.855365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:05:31.043 [2024-12-09 10:16:37.855379] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:05:31.043 [2024-12-09 10:16:37.855390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:05:31.043 [2024-12-09 10:16:37.855401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.043 [2024-12-09 10:16:37.985342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:05:31.043 [2024-12-09 10:16:37.985429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:05:31.043 [2024-12-09 10:16:37.985449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:05:31.043 [2024-12-09 10:16:37.985461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.043 [2024-12-09 10:16:38.085203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:05:31.043 [2024-12-09 10:16:38.085312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:05:31.043 [2024-12-09 10:16:38.085349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:05:31.043 [2024-12-09 10:16:38.085361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.043 [2024-12-09 10:16:38.085519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:05:31.043 [2024-12-09 10:16:38.085540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:05:31.043 [2024-12-09 10:16:38.085552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:05:31.043 [2024-12-09 10:16:38.085564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.043 [2024-12-09 10:16:38.085640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:05:31.044 [2024-12-09 10:16:38.085655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:05:31.044 [2024-12-09 10:16:38.085668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:05:31.044 [2024-12-09 10:16:38.085678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.044 [2024-12-09 10:16:38.085859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:05:31.044 [2024-12-09 10:16:38.085903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:05:31.044 [2024-12-09 10:16:38.085930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:05:31.044 [2024-12-09 10:16:38.085942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.044 [2024-12-09 10:16:38.085990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:05:31.044 [2024-12-09 10:16:38.086017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:05:31.044 [2024-12-09 10:16:38.086031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:05:31.044 [2024-12-09 10:16:38.086041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.044 [2024-12-09 10:16:38.086087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:05:31.044 [2024-12-09 10:16:38.086102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:05:31.044 [2024-12-09 10:16:38.086120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:05:31.044 [2024-12-09 10:16:38.086131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.044 [2024-12-09 10:16:38.086193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 01:05:31.044 [2024-12-09 10:16:38.086209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:05:31.044 [2024-12-09 10:16:38.086221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:05:31.044 [2024-12-09 10:16:38.086232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:31.044 [2024-12-09 10:16:38.086396] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.843 ms, result 0 01:05:32.419 01:05:32.419 01:05:32.419 10:16:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:05:34.951 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 01:05:34.951 10:16:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:05:35.209 [2024-12-09 10:16:42.004439] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:05:35.209 [2024-12-09 10:16:42.004713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83629 ] 01:05:35.209 [2024-12-09 10:16:42.197125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:35.467 [2024-12-09 10:16:42.347657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:05:35.729 [2024-12-09 10:16:42.766213] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:05:35.729 [2024-12-09 10:16:42.766298] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:05:35.989 [2024-12-09 10:16:42.938336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.989 [2024-12-09 10:16:42.938416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:05:35.989 [2024-12-09 10:16:42.938437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:05:35.989 [2024-12-09 10:16:42.938449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.989 [2024-12-09 10:16:42.938519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.989 [2024-12-09 10:16:42.938547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:05:35.989 [2024-12-09 10:16:42.938561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 01:05:35.989 [2024-12-09 10:16:42.938572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.989 [2024-12-09 10:16:42.938605] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:05:35.989 [2024-12-09 10:16:42.939528] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:05:35.989 [2024-12-09 10:16:42.939565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.989 [2024-12-09 10:16:42.939578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:05:35.989 [2024-12-09 10:16:42.939590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 01:05:35.989 [2024-12-09 10:16:42.939601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 01:05:35.989 [2024-12-09 10:16:42.941757] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:05:35.989 [2024-12-09 10:16:42.961172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.989 [2024-12-09 10:16:42.961215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:05:35.989 [2024-12-09 10:16:42.961248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.417 ms 01:05:35.989 [2024-12-09 10:16:42.961260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.989 [2024-12-09 10:16:42.961376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.989 [2024-12-09 10:16:42.961396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:05:35.989 [2024-12-09 10:16:42.961409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 01:05:35.989 [2024-12-09 10:16:42.961420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.990 [2024-12-09 10:16:42.972009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.990 [2024-12-09 10:16:42.972054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:05:35.990 [2024-12-09 10:16:42.972102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.478 ms 01:05:35.990 [2024-12-09 10:16:42.972135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.990 [2024-12-09 10:16:42.972250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.990 [2024-12-09 10:16:42.972269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:05:35.990 [2024-12-09 10:16:42.972282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 01:05:35.990 [2024-12-09 10:16:42.972293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.990 [2024-12-09 10:16:42.972391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.990 [2024-12-09 10:16:42.972412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:05:35.990 [2024-12-09 10:16:42.972425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:05:35.990 [2024-12-09 10:16:42.972437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.990 [2024-12-09 10:16:42.972478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:05:35.990 [2024-12-09 10:16:42.978188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.990 [2024-12-09 10:16:42.978227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:05:35.990 [2024-12-09 10:16:42.978257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.718 ms 01:05:35.990 [2024-12-09 10:16:42.978272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.990 [2024-12-09 10:16:42.978317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.990 [2024-12-09 10:16:42.978335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:05:35.990 [2024-12-09 10:16:42.978358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:05:35.990 [2024-12-09 10:16:42.978375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.990 [2024-12-09 10:16:42.978431] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
FTL layout setup mode 0 01:05:35.990 [2024-12-09 10:16:42.978465] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:05:35.990 [2024-12-09 10:16:42.978509] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:05:35.990 [2024-12-09 10:16:42.978541] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:05:35.990 [2024-12-09 10:16:42.978651] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:05:35.990 [2024-12-09 10:16:42.978666] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:05:35.990 [2024-12-09 10:16:42.978681] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:05:35.990 [2024-12-09 10:16:42.978696] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:05:35.990 [2024-12-09 10:16:42.978710] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:05:35.990 [2024-12-09 10:16:42.978722] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:05:35.990 [2024-12-09 10:16:42.978733] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:05:35.990 [2024-12-09 10:16:42.978749] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:05:35.990 [2024-12-09 10:16:42.978760] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:05:35.990 [2024-12-09 10:16:42.978772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.990 [2024-12-09 10:16:42.978784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:05:35.990 [2024-12-09 10:16:42.978796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 01:05:35.990 [2024-12-09 10:16:42.978807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.990 [2024-12-09 10:16:42.978915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.990 [2024-12-09 10:16:42.978940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:05:35.990 [2024-12-09 10:16:42.978952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:05:35.990 [2024-12-09 10:16:42.978963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.990 [2024-12-09 10:16:42.979086] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:05:35.990 [2024-12-09 10:16:42.979118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:05:35.990 [2024-12-09 10:16:42.979132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:05:35.990 [2024-12-09 10:16:42.979144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:05:35.990 [2024-12-09 10:16:42.979156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:05:35.990 [2024-12-09 10:16:42.979166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:05:35.990 [2024-12-09 10:16:42.979177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:05:35.990 [2024-12-09 10:16:42.979188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:05:35.990 [2024-12-09 10:16:42.979198] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:05:35.990 [2024-12-09 10:16:42.979208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:05:35.990 [2024-12-09 10:16:42.979218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:05:35.990 [2024-12-09 10:16:42.979229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:05:35.990 [2024-12-09 10:16:42.979262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:05:35.990 [2024-12-09 10:16:42.979290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:05:35.990 [2024-12-09 10:16:42.979302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:05:35.990 [2024-12-09 10:16:42.979312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:05:35.990 [2024-12-09 10:16:42.979323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:05:35.990 [2024-12-09 10:16:42.979334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:05:35.990 [2024-12-09 10:16:42.979344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:05:35.990 [2024-12-09 10:16:42.979355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:05:35.990 [2024-12-09 10:16:42.979365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:05:35.990 [2024-12-09 10:16:42.979376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:05:35.990 [2024-12-09 10:16:42.979386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:05:35.990 [2024-12-09 10:16:42.979397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:05:35.990 [2024-12-09 10:16:42.979407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:05:35.990 [2024-12-09 10:16:42.979417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:05:35.990 [2024-12-09 10:16:42.979427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:05:35.990 [2024-12-09 10:16:42.979438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:05:35.990 [2024-12-09 10:16:42.979448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:05:35.990 [2024-12-09 10:16:42.979470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:05:35.990 [2024-12-09 10:16:42.979481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:05:35.990 [2024-12-09 10:16:42.979491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:05:35.990 [2024-12-09 10:16:42.979501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:05:35.990 [2024-12-09 10:16:42.979511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:05:35.990 [2024-12-09 10:16:42.979521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:05:35.990 [2024-12-09 10:16:42.979532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:05:35.990 [2024-12-09 10:16:42.979542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:05:35.990 [2024-12-09 10:16:42.979553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:05:35.990 [2024-12-09 10:16:42.979563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:05:35.990 [2024-12-09 10:16:42.979573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
01:05:35.990 [2024-12-09 10:16:42.979583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:05:35.990 [2024-12-09 10:16:42.979594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:05:35.990 [2024-12-09 10:16:42.979604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:05:35.990 [2024-12-09 10:16:42.979614] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:05:35.990 [2024-12-09 10:16:42.979626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:05:35.990 [2024-12-09 10:16:42.979639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:05:35.990 [2024-12-09 10:16:42.979651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:05:35.990 [2024-12-09 10:16:42.979663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:05:35.990 [2024-12-09 10:16:42.979674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:05:35.990 [2024-12-09 10:16:42.979684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:05:35.990 [2024-12-09 10:16:42.979694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:05:35.990 [2024-12-09 10:16:42.979704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:05:35.990 [2024-12-09 10:16:42.979715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:05:35.990 [2024-12-09 10:16:42.979727] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:05:35.990 [2024-12-09 10:16:42.979742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:05:35.990 [2024-12-09 10:16:42.979761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:05:35.990 [2024-12-09 10:16:42.979773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:05:35.990 [2024-12-09 10:16:42.979784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:05:35.990 [2024-12-09 10:16:42.979795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:05:35.990 [2024-12-09 10:16:42.979806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:05:35.990 [2024-12-09 10:16:42.979829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:05:35.991 [2024-12-09 10:16:42.979840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:05:35.991 [2024-12-09 10:16:42.979852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:05:35.991 [2024-12-09 10:16:42.979863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:05:35.991 [2024-12-09 10:16:42.979874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:05:35.991 [2024-12-09 
10:16:42.979885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:05:35.991 [2024-12-09 10:16:42.979896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:05:35.991 [2024-12-09 10:16:42.979910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:05:35.991 [2024-12-09 10:16:42.979921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:05:35.991 [2024-12-09 10:16:42.979932] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:05:35.991 [2024-12-09 10:16:42.979944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:05:35.991 [2024-12-09 10:16:42.979956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:05:35.991 [2024-12-09 10:16:42.979967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:05:35.991 [2024-12-09 10:16:42.979979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:05:35.991 [2024-12-09 10:16:42.979990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:05:35.991 [2024-12-09 10:16:42.980002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.991 [2024-12-09 10:16:42.980014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:05:35.991 [2024-12-09 10:16:42.980027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 01:05:35.991 [2024-12-09 10:16:42.980038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.991 [2024-12-09 10:16:43.027767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.991 [2024-12-09 10:16:43.027846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:05:35.991 [2024-12-09 10:16:43.027866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.660 ms 01:05:35.991 [2024-12-09 10:16:43.027884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:35.991 [2024-12-09 10:16:43.028002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:35.991 [2024-12-09 10:16:43.028018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:05:35.991 [2024-12-09 10:16:43.028032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 01:05:35.991 [2024-12-09 10:16:43.028044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.250 [2024-12-09 10:16:43.092851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.250 [2024-12-09 10:16:43.092924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:05:36.250 [2024-12-09 10:16:43.092974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.713 ms 01:05:36.250 [2024-12-09 10:16:43.092986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.250 [2024-12-09 
10:16:43.093089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.250 [2024-12-09 10:16:43.093107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:05:36.250 [2024-12-09 10:16:43.093142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:05:36.250 [2024-12-09 10:16:43.093154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.250 [2024-12-09 10:16:43.093860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.250 [2024-12-09 10:16:43.093890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:05:36.250 [2024-12-09 10:16:43.093904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 01:05:36.250 [2024-12-09 10:16:43.093916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.250 [2024-12-09 10:16:43.094111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.250 [2024-12-09 10:16:43.094131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:05:36.250 [2024-12-09 10:16:43.094166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 01:05:36.250 [2024-12-09 10:16:43.094177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.250 [2024-12-09 10:16:43.117362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.250 [2024-12-09 10:16:43.117448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:05:36.250 [2024-12-09 10:16:43.117482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.156 ms 01:05:36.250 [2024-12-09 10:16:43.117508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.250 [2024-12-09 10:16:43.136882] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:05:36.250 [2024-12-09 10:16:43.136949] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:05:36.250 [2024-12-09 10:16:43.136967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.250 [2024-12-09 10:16:43.136980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:05:36.250 [2024-12-09 10:16:43.136993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.254 ms 01:05:36.250 [2024-12-09 10:16:43.137004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.250 [2024-12-09 10:16:43.170834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.250 [2024-12-09 10:16:43.170927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:05:36.250 [2024-12-09 10:16:43.170944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.768 ms 01:05:36.250 [2024-12-09 10:16:43.170964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.250 [2024-12-09 10:16:43.188461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.250 [2024-12-09 10:16:43.188546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:05:36.250 [2024-12-09 10:16:43.188564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.432 ms 01:05:36.250 [2024-12-09 10:16:43.188575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.250 [2024-12-09 10:16:43.206596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 01:05:36.250 [2024-12-09 10:16:43.206638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:05:36.250 [2024-12-09 10:16:43.206670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.975 ms 01:05:36.250 [2024-12-09 10:16:43.206681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.250 [2024-12-09 10:16:43.207589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.250 [2024-12-09 10:16:43.207621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:05:36.250 [2024-12-09 10:16:43.207656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 01:05:36.250 [2024-12-09 10:16:43.207668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.250 [2024-12-09 10:16:43.293665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.250 [2024-12-09 10:16:43.293739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:05:36.250 [2024-12-09 10:16:43.293767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.953 ms 01:05:36.250 [2024-12-09 10:16:43.293780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.509 [2024-12-09 10:16:43.308333] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:05:36.509 [2024-12-09 10:16:43.312530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.509 [2024-12-09 10:16:43.312598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:05:36.509 [2024-12-09 10:16:43.312626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.653 ms 01:05:36.509 [2024-12-09 10:16:43.312639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.509 [2024-12-09 10:16:43.312768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.509 [2024-12-09 10:16:43.312788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:05:36.509 [2024-12-09 10:16:43.312819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 01:05:36.509 [2024-12-09 10:16:43.312831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.509 [2024-12-09 10:16:43.313962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.509 [2024-12-09 10:16:43.314022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:05:36.509 [2024-12-09 10:16:43.314038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 01:05:36.509 [2024-12-09 10:16:43.314049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.509 [2024-12-09 10:16:43.314088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.509 [2024-12-09 10:16:43.314104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:05:36.509 [2024-12-09 10:16:43.314117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:05:36.509 [2024-12-09 10:16:43.314129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.509 [2024-12-09 10:16:43.314180] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:05:36.509 [2024-12-09 10:16:43.314198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.509 [2024-12-09 10:16:43.314210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Self test on startup 01:05:36.509 [2024-12-09 10:16:43.314221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 01:05:36.509 [2024-12-09 10:16:43.314232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.509 [2024-12-09 10:16:43.350348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.509 [2024-12-09 10:16:43.350434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:05:36.509 [2024-12-09 10:16:43.350461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.064 ms 01:05:36.509 [2024-12-09 10:16:43.350474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.509 [2024-12-09 10:16:43.350564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:05:36.509 [2024-12-09 10:16:43.350582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:05:36.509 [2024-12-09 10:16:43.350596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 01:05:36.509 [2024-12-09 10:16:43.350610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:05:36.509 [2024-12-09 10:16:43.352011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 413.159 ms, result 0 01:05:37.897  [2024-12-09T10:16:45.877Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-09T10:16:46.813Z] Copying: 52/1024 [MB] (26 MBps) [2024-12-09T10:16:47.749Z] Copying: 78/1024 [MB] (26 MBps) [2024-12-09T10:16:48.685Z] Copying: 104/1024 [MB] (26 MBps) [2024-12-09T10:16:49.620Z] Copying: 130/1024 [MB] (26 MBps) [2024-12-09T10:16:50.996Z] Copying: 157/1024 [MB] (26 MBps) [2024-12-09T10:16:51.930Z] Copying: 181/1024 [MB] (24 MBps) [2024-12-09T10:16:52.865Z] Copying: 207/1024 [MB] (26 MBps) [2024-12-09T10:16:53.800Z] Copying: 234/1024 [MB] (26 MBps) [2024-12-09T10:16:54.734Z] Copying: 260/1024 [MB] (26 MBps) [2024-12-09T10:16:55.666Z] Copying: 286/1024 [MB] (26 MBps) [2024-12-09T10:16:56.599Z] Copying: 313/1024 [MB] (26 MBps) [2024-12-09T10:16:57.975Z] Copying: 339/1024 [MB] (26 MBps) [2024-12-09T10:16:58.910Z] Copying: 365/1024 [MB] (26 MBps) [2024-12-09T10:16:59.847Z] Copying: 392/1024 [MB] (26 MBps) [2024-12-09T10:17:00.782Z] Copying: 416/1024 [MB] (24 MBps) [2024-12-09T10:17:01.717Z] Copying: 441/1024 [MB] (25 MBps) [2024-12-09T10:17:02.688Z] Copying: 468/1024 [MB] (26 MBps) [2024-12-09T10:17:03.624Z] Copying: 492/1024 [MB] (24 MBps) [2024-12-09T10:17:05.000Z] Copying: 513/1024 [MB] (21 MBps) [2024-12-09T10:17:05.942Z] Copying: 535/1024 [MB] (21 MBps) [2024-12-09T10:17:06.878Z] Copying: 557/1024 [MB] (21 MBps) [2024-12-09T10:17:07.815Z] Copying: 580/1024 [MB] (22 MBps) [2024-12-09T10:17:08.752Z] Copying: 603/1024 [MB] (23 MBps) [2024-12-09T10:17:09.713Z] Copying: 625/1024 [MB] (22 MBps) [2024-12-09T10:17:10.650Z] Copying: 647/1024 [MB] (22 MBps) [2024-12-09T10:17:12.027Z] Copying: 670/1024 [MB] (23 MBps) [2024-12-09T10:17:12.595Z] Copying: 694/1024 [MB] (23 MBps) [2024-12-09T10:17:13.971Z] Copying: 717/1024 [MB] (22 MBps) [2024-12-09T10:17:14.944Z] Copying: 741/1024 [MB] (24 MBps) [2024-12-09T10:17:15.887Z] Copying: 767/1024 [MB] (26 MBps) [2024-12-09T10:17:16.822Z] Copying: 793/1024 [MB] (25 MBps) [2024-12-09T10:17:17.758Z] Copying: 815/1024 [MB] (22 MBps) [2024-12-09T10:17:18.702Z] Copying: 838/1024 [MB] (22 MBps) [2024-12-09T10:17:19.639Z] Copying: 862/1024 [MB] (23 MBps) [2024-12-09T10:17:21.014Z] Copying: 885/1024 [MB] (23 MBps) [2024-12-09T10:17:21.946Z] Copying: 907/1024 [MB] (21 
MBps) [2024-12-09T10:17:22.886Z] Copying: 930/1024 [MB] (23 MBps) [2024-12-09T10:17:23.821Z] Copying: 954/1024 [MB] (23 MBps) [2024-12-09T10:17:24.756Z] Copying: 977/1024 [MB] (23 MBps) [2024-12-09T10:17:25.695Z] Copying: 1001/1024 [MB] (24 MBps) [2024-12-09T10:17:25.695Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-09 10:17:25.677445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.651 [2024-12-09 10:17:25.677541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:06:18.651 [2024-12-09 10:17:25.677578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:06:18.651 [2024-12-09 10:17:25.677594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.651 [2024-12-09 10:17:25.677637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:06:18.651 [2024-12-09 10:17:25.682991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.651 [2024-12-09 10:17:25.683035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:06:18.651 [2024-12-09 10:17:25.683069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.323 ms 01:06:18.651 [2024-12-09 10:17:25.683081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.651 [2024-12-09 10:17:25.683376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.651 [2024-12-09 10:17:25.683406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:06:18.651 [2024-12-09 10:17:25.683422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 01:06:18.651 [2024-12-09 10:17:25.683433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.651 [2024-12-09 10:17:25.687630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.651 [2024-12-09 10:17:25.687683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:06:18.651 [2024-12-09 10:17:25.687698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.174 ms 01:06:18.651 [2024-12-09 10:17:25.687733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.651 [2024-12-09 10:17:25.694704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.651 [2024-12-09 10:17:25.694739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:06:18.651 [2024-12-09 10:17:25.694770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.943 ms 01:06:18.651 [2024-12-09 10:17:25.694782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.910 [2024-12-09 10:17:25.728459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.911 [2024-12-09 10:17:25.728508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:06:18.911 [2024-12-09 10:17:25.728525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.590 ms 01:06:18.911 [2024-12-09 10:17:25.728536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.911 [2024-12-09 10:17:25.747038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.911 [2024-12-09 10:17:25.747091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:06:18.911 [2024-12-09 10:17:25.747114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.470 ms 01:06:18.911 [2024-12-09 10:17:25.747126] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.911 [2024-12-09 10:17:25.748978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.911 [2024-12-09 10:17:25.749037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:06:18.911 [2024-12-09 10:17:25.749070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.789 ms 01:06:18.911 [2024-12-09 10:17:25.749082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.911 [2024-12-09 10:17:25.782447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.911 [2024-12-09 10:17:25.782530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:06:18.911 [2024-12-09 10:17:25.782555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.341 ms 01:06:18.911 [2024-12-09 10:17:25.782566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.911 [2024-12-09 10:17:25.814189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.911 [2024-12-09 10:17:25.814258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:06:18.911 [2024-12-09 10:17:25.814277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.590 ms 01:06:18.911 [2024-12-09 10:17:25.814289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.911 [2024-12-09 10:17:25.845044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.911 [2024-12-09 10:17:25.845105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:06:18.911 [2024-12-09 10:17:25.845136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.726 ms 01:06:18.911 [2024-12-09 10:17:25.845146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.911 [2024-12-09 10:17:25.876193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.911 [2024-12-09 10:17:25.876283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:06:18.911 [2024-12-09 10:17:25.876301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.969 ms 01:06:18.911 [2024-12-09 10:17:25.876313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.911 [2024-12-09 10:17:25.876341] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:06:18.911 [2024-12-09 10:17:25.876371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 01:06:18.911 [2024-12-09 10:17:25.876392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 01:06:18.911 [2024-12-09 10:17:25.876405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
8: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876772] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.876998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877070] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:06:18.911 [2024-12-09 10:17:25.877177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 
10:17:25.877382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:06:18.912 [2024-12-09 10:17:25.877611] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:06:18.912 [2024-12-09 10:17:25.877622] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c7a4782-9886-43d6-8da8-01df0a702c96 01:06:18.912 [2024-12-09 10:17:25.877634] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 01:06:18.912 [2024-12-09 10:17:25.877645] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:06:18.912 [2024-12-09 10:17:25.877655] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:06:18.912 [2024-12-09 10:17:25.877667] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:06:18.912 [2024-12-09 10:17:25.877692] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:06:18.912 [2024-12-09 10:17:25.877705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:06:18.912 [2024-12-09 10:17:25.877716] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:06:18.912 [2024-12-09 10:17:25.877727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:06:18.912 [2024-12-09 10:17:25.877737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:06:18.912 [2024-12-09 10:17:25.877749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.912 [2024-12-09 10:17:25.877760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:06:18.912 [2024-12-09 10:17:25.877773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 01:06:18.912 [2024-12-09 10:17:25.877789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.912 [2024-12-09 10:17:25.895493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.912 [2024-12-09 10:17:25.895554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:06:18.912 [2024-12-09 10:17:25.895587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.660 ms 01:06:18.912 [2024-12-09 10:17:25.895599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.912 [2024-12-09 10:17:25.896096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:06:18.912 [2024-12-09 10:17:25.896133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:06:18.912 [2024-12-09 10:17:25.896148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 01:06:18.912 [2024-12-09 10:17:25.896159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.912 [2024-12-09 10:17:25.942620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:06:18.912 [2024-12-09 10:17:25.942705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:06:18.912 [2024-12-09 10:17:25.942741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:06:18.912 [2024-12-09 10:17:25.942768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.912 [2024-12-09 10:17:25.942853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:06:18.912 [2024-12-09 10:17:25.942891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:06:18.912 [2024-12-09 10:17:25.942904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:06:18.912 [2024-12-09 10:17:25.942915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.912 [2024-12-09 10:17:25.943022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:06:18.912 [2024-12-09 10:17:25.943059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:06:18.912 [2024-12-09 10:17:25.943072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:06:18.912 [2024-12-09 10:17:25.943083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:18.912 [2024-12-09 10:17:25.943108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:06:18.912 [2024-12-09 10:17:25.943122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:06:18.912 [2024-12-09 10:17:25.943142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:06:18.912 [2024-12-09 10:17:25.943153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:06:19.171 [2024-12-09 10:17:26.057348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
01:06:19.171 [2024-12-09 10:17:26.057417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
01:06:19.171 [2024-12-09 10:17:26.057437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:06:19.171 [2024-12-09 10:17:26.057449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:06:19.171 [2024-12-09 10:17:26.145968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:06:19.171 [2024-12-09 10:17:26.146090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
01:06:19.171 [2024-12-09 10:17:26.146110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:06:19.171 [2024-12-09 10:17:26.146122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:06:19.171 [2024-12-09 10:17:26.146233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:06:19.171 [2024-12-09 10:17:26.146275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
01:06:19.171 [2024-12-09 10:17:26.146293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:06:19.171 [2024-12-09 10:17:26.146304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:06:19.171 [2024-12-09 10:17:26.146357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:06:19.171 [2024-12-09 10:17:26.146373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
01:06:19.171 [2024-12-09 10:17:26.146386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:06:19.171 [2024-12-09 10:17:26.146405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:06:19.171 [2024-12-09 10:17:26.146539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:06:19.171 [2024-12-09 10:17:26.146560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
01:06:19.171 [2024-12-09 10:17:26.146574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:06:19.171 [2024-12-09 10:17:26.146585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:06:19.171 [2024-12-09 10:17:26.146635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:06:19.171 [2024-12-09 10:17:26.146655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
01:06:19.171 [2024-12-09 10:17:26.146668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:06:19.171 [2024-12-09 10:17:26.146679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:06:19.171 [2024-12-09 10:17:26.146734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:06:19.171 [2024-12-09 10:17:26.146750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
01:06:19.171 [2024-12-09 10:17:26.146763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:06:19.171 [2024-12-09 10:17:26.146774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:06:19.171 [2024-12-09 10:17:26.146826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:06:19.171 [2024-12-09 10:17:26.146843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
01:06:19.171 [2024-12-09 10:17:26.146856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:06:19.171 [2024-12-09 10:17:26.146873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:06:19.171 [2024-12-09 10:17:26.147022] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 469.565 ms, result 0
01:06:20.546
01:06:20.546
01:06:20.546 10:17:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
01:06:22.447 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
01:06:22.447 10:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
01:06:22.447 10:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
01:06:22.447 10:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
01:06:22.447 10:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
01:06:22.447 10:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
01:06:22.706 10:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
01:06:22.706 10:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
01:06:22.706 10:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81624
01:06:22.706 10:17:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81624 ']'
01:06:22.706 Process with pid 81624 is not found
01:06:22.706 10:17:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81624
01:06:22.706 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81624) - No such process
01:06:22.706 10:17:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81624 is not found'
01:06:22.706 10:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
01:06:22.964 Remove shared memory files
01:06:22.964 10:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
01:06:22.964 10:17:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
01:06:22.964 10:17:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
01:06:22.964 10:17:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
01:06:22.964 10:17:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
01:06:22.964 10:17:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
01:06:22.964 10:17:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
01:06:22.964
01:06:22.964 real 4m8.422s
01:06:22.964 user 4m44.305s
01:06:22.964 sys 0m41.182s
01:06:22.964 10:17:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
01:06:22.964 10:17:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
01:06:22.964 ************************************
01:06:22.964 END TEST ftl_dirty_shutdown
01:06:22.964 ************************************
01:06:22.964 10:17:29 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
01:06:22.964 10:17:29 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
01:06:22.964 10:17:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
01:06:22.964 10:17:29 ftl -- common/autotest_common.sh@10 -- # set +x
01:06:22.964 ************************************
01:06:22.964 START TEST ftl_upgrade_shutdown
01:06:22.964 ************************************
01:06:22.964 10:17:29
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 01:06:23.224 * Looking for test storage... 01:06:23.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:06:23.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:23.224 --rc genhtml_branch_coverage=1 01:06:23.224 --rc genhtml_function_coverage=1 01:06:23.224 --rc genhtml_legend=1 01:06:23.224 --rc geninfo_all_blocks=1 01:06:23.224 --rc geninfo_unexecuted_blocks=1 01:06:23.224 01:06:23.224 ' 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:06:23.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:23.224 --rc genhtml_branch_coverage=1 01:06:23.224 --rc genhtml_function_coverage=1 01:06:23.224 --rc genhtml_legend=1 01:06:23.224 --rc geninfo_all_blocks=1 01:06:23.224 --rc geninfo_unexecuted_blocks=1 01:06:23.224 01:06:23.224 ' 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:06:23.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:23.224 --rc genhtml_branch_coverage=1 01:06:23.224 --rc genhtml_function_coverage=1 01:06:23.224 --rc genhtml_legend=1 01:06:23.224 --rc geninfo_all_blocks=1 01:06:23.224 --rc geninfo_unexecuted_blocks=1 01:06:23.224 01:06:23.224 ' 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:06:23.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:23.224 --rc genhtml_branch_coverage=1 01:06:23.224 --rc genhtml_function_coverage=1 01:06:23.224 --rc genhtml_legend=1 01:06:23.224 --rc geninfo_all_blocks=1 01:06:23.224 --rc geninfo_unexecuted_blocks=1 01:06:23.224 01:06:23.224 ' 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:06:23.224 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 01:06:23.225 10:17:30 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84166 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84166 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84166 ']' 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:23.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:23.225 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 01:06:23.484 [2024-12-09 10:17:30.297119] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
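The trace above launches the SPDK target pinned to core 0 and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that bring-up follows; the polling loop is an illustrative stand-in for the suite's waitforlisten helper, with rpc_get_methods used only as a cheap liveness probe:

  #!/usr/bin/env bash
  # Start spdk_tgt on core 0 and wait for its default RPC socket, mirroring
  # the bring-up traced above. The until-loop approximates waitforlisten.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" --cpumask='[0]' &
  spdk_tgt_pid=$!
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  echo "spdk_tgt ($spdk_tgt_pid) is ready on /var/tmp/spdk.sock"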
01:06:23.484 [2024-12-09 10:17:30.297329] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84166 ] 01:06:23.484 [2024-12-09 10:17:30.491913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:23.743 [2024-12-09 10:17:30.655638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 01:06:24.680 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 01:06:24.939 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 01:06:24.939 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 01:06:24.939 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 01:06:24.939 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 01:06:24.939 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:06:24.939 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:06:24.939 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 01:06:24.939 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 01:06:25.198 10:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:06:25.198 { 01:06:25.198 "name": "basen1", 01:06:25.198 "aliases": [ 01:06:25.198 "82e925a3-8ab4-4e58-9fb4-1c77f0b3b7b5" 01:06:25.198 ], 01:06:25.198 "product_name": "NVMe disk", 01:06:25.198 "block_size": 4096, 01:06:25.198 "num_blocks": 1310720, 01:06:25.198 "uuid": "82e925a3-8ab4-4e58-9fb4-1c77f0b3b7b5", 01:06:25.198 "numa_id": -1, 01:06:25.198 "assigned_rate_limits": { 01:06:25.198 "rw_ios_per_sec": 0, 01:06:25.198 "rw_mbytes_per_sec": 0, 01:06:25.198 "r_mbytes_per_sec": 0, 01:06:25.198 "w_mbytes_per_sec": 0 01:06:25.198 }, 01:06:25.198 "claimed": true, 01:06:25.198 "claim_type": "read_many_write_one", 01:06:25.198 "zoned": false, 01:06:25.198 "supported_io_types": { 01:06:25.198 "read": true, 01:06:25.198 "write": true, 01:06:25.198 "unmap": true, 01:06:25.198 "flush": true, 01:06:25.198 "reset": true, 01:06:25.198 "nvme_admin": true, 01:06:25.198 "nvme_io": true, 01:06:25.198 "nvme_io_md": false, 01:06:25.198 "write_zeroes": true, 01:06:25.198 "zcopy": false, 01:06:25.198 "get_zone_info": false, 01:06:25.198 "zone_management": false, 01:06:25.198 "zone_append": false, 01:06:25.198 "compare": true, 01:06:25.198 "compare_and_write": false, 01:06:25.198 "abort": true, 01:06:25.198 "seek_hole": false, 01:06:25.198 "seek_data": false, 01:06:25.198 "copy": true, 01:06:25.198 "nvme_iov_md": false 01:06:25.198 }, 01:06:25.198 "driver_specific": { 01:06:25.198 "nvme": [ 01:06:25.198 { 01:06:25.198 "pci_address": "0000:00:11.0", 01:06:25.198 "trid": { 01:06:25.198 "trtype": "PCIe", 01:06:25.198 "traddr": "0000:00:11.0" 01:06:25.198 }, 01:06:25.198 "ctrlr_data": { 01:06:25.198 "cntlid": 0, 01:06:25.198 "vendor_id": "0x1b36", 01:06:25.198 "model_number": "QEMU NVMe Ctrl", 01:06:25.198 "serial_number": "12341", 01:06:25.198 "firmware_revision": "8.0.0", 01:06:25.198 "subnqn": "nqn.2019-08.org.qemu:12341", 01:06:25.198 "oacs": { 01:06:25.198 "security": 0, 01:06:25.198 "format": 1, 01:06:25.198 "firmware": 0, 01:06:25.198 "ns_manage": 1 01:06:25.198 }, 01:06:25.198 "multi_ctrlr": false, 01:06:25.198 "ana_reporting": false 01:06:25.198 }, 01:06:25.198 "vs": { 01:06:25.198 "nvme_version": "1.4" 01:06:25.198 }, 01:06:25.198 "ns_data": { 01:06:25.198 "id": 1, 01:06:25.198 "can_share": false 01:06:25.198 } 01:06:25.198 } 01:06:25.198 ], 01:06:25.198 "mp_policy": "active_passive" 01:06:25.198 } 01:06:25.198 } 01:06:25.198 ]' 01:06:25.198 10:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:06:25.456 10:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:06:25.456 10:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:06:25.456 10:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 01:06:25.456 10:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:06:25.456 10:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 01:06:25.456 10:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 01:06:25.456 10:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 01:06:25.456 10:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 01:06:25.456 10:17:32 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:06:25.456 10:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:06:25.714 10:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=822c94d0-8e9b-417a-9b2a-dbeb634e3244 01:06:25.714 10:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 01:06:25.714 10:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 822c94d0-8e9b-417a-9b2a-dbeb634e3244 01:06:25.972 10:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 01:06:26.230 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=6589e53e-70cc-4a8a-b860-805c2979ad6c 01:06:26.230 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 6589e53e-70cc-4a8a-b860-805c2979ad6c 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=f3596d7c-31ce-4914-b245-08b6eca5b526 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z f3596d7c-31ce-4914-b245-08b6eca5b526 ]] 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 f3596d7c-31ce-4914-b245-08b6eca5b526 5120 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=f3596d7c-31ce-4914-b245-08b6eca5b526 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size f3596d7c-31ce-4914-b245-08b6eca5b526 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f3596d7c-31ce-4914-b245-08b6eca5b526 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f3596d7c-31ce-4914-b245-08b6eca5b526 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:06:26.796 { 01:06:26.796 "name": "f3596d7c-31ce-4914-b245-08b6eca5b526", 01:06:26.796 "aliases": [ 01:06:26.796 "lvs/basen1p0" 01:06:26.796 ], 01:06:26.796 "product_name": "Logical Volume", 01:06:26.796 "block_size": 4096, 01:06:26.796 "num_blocks": 5242880, 01:06:26.796 "uuid": "f3596d7c-31ce-4914-b245-08b6eca5b526", 01:06:26.796 "assigned_rate_limits": { 01:06:26.796 "rw_ios_per_sec": 0, 01:06:26.796 "rw_mbytes_per_sec": 0, 01:06:26.796 "r_mbytes_per_sec": 0, 01:06:26.796 "w_mbytes_per_sec": 0 01:06:26.796 }, 01:06:26.796 "claimed": false, 01:06:26.796 "zoned": false, 01:06:26.796 "supported_io_types": { 01:06:26.796 "read": true, 01:06:26.796 "write": true, 01:06:26.796 "unmap": true, 01:06:26.796 "flush": false, 01:06:26.796 "reset": true, 01:06:26.796 "nvme_admin": false, 01:06:26.796 "nvme_io": false, 01:06:26.796 "nvme_io_md": false, 01:06:26.796 "write_zeroes": 
true, 01:06:26.796 "zcopy": false, 01:06:26.796 "get_zone_info": false, 01:06:26.796 "zone_management": false, 01:06:26.796 "zone_append": false, 01:06:26.796 "compare": false, 01:06:26.796 "compare_and_write": false, 01:06:26.796 "abort": false, 01:06:26.796 "seek_hole": true, 01:06:26.796 "seek_data": true, 01:06:26.796 "copy": false, 01:06:26.796 "nvme_iov_md": false 01:06:26.796 }, 01:06:26.796 "driver_specific": { 01:06:26.796 "lvol": { 01:06:26.796 "lvol_store_uuid": "6589e53e-70cc-4a8a-b860-805c2979ad6c", 01:06:26.796 "base_bdev": "basen1", 01:06:26.796 "thin_provision": true, 01:06:26.796 "num_allocated_clusters": 0, 01:06:26.796 "snapshot": false, 01:06:26.796 "clone": false, 01:06:26.796 "esnap_clone": false 01:06:26.796 } 01:06:26.796 } 01:06:26.796 } 01:06:26.796 ]' 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:06:26.796 10:17:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:06:27.055 10:17:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 01:06:27.055 10:17:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 01:06:27.055 10:17:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 01:06:27.055 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 01:06:27.055 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 01:06:27.055 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 01:06:27.313 10:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 01:06:27.313 10:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 01:06:27.313 10:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 01:06:27.571 10:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 01:06:27.571 10:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 01:06:27.571 10:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f3596d7c-31ce-4914-b245-08b6eca5b526 -c cachen1p0 --l2p_dram_limit 2 01:06:27.893 [2024-12-09 10:17:34.747383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:27.893 [2024-12-09 10:17:34.747453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 01:06:27.893 [2024-12-09 10:17:34.747481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 01:06:27.893 [2024-12-09 10:17:34.747494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:27.893 [2024-12-09 10:17:34.747585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:27.893 [2024-12-09 10:17:34.747604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:06:27.893 [2024-12-09 10:17:34.747621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 01:06:27.893 [2024-12-09 10:17:34.747634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:27.893 [2024-12-09 10:17:34.747669] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 01:06:27.893 [2024-12-09 
10:17:34.748821] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 01:06:27.893 [2024-12-09 10:17:34.748874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:27.893 [2024-12-09 10:17:34.748890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:06:27.893 [2024-12-09 10:17:34.748909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.209 ms 01:06:27.893 [2024-12-09 10:17:34.748921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:27.893 [2024-12-09 10:17:34.749073] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 010e698c-d7c3-41c5-90c6-8788d7ba5096 01:06:27.893 [2024-12-09 10:17:34.751040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:27.893 [2024-12-09 10:17:34.751231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 01:06:27.893 [2024-12-09 10:17:34.751285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 01:06:27.893 [2024-12-09 10:17:34.751318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:27.893 [2024-12-09 10:17:34.761577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:27.893 [2024-12-09 10:17:34.761774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:06:27.893 [2024-12-09 10:17:34.761901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.156 ms 01:06:27.893 [2024-12-09 10:17:34.761972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:27.893 [2024-12-09 10:17:34.762223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:27.893 [2024-12-09 10:17:34.762316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:06:27.893 [2024-12-09 10:17:34.762482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 01:06:27.893 [2024-12-09 10:17:34.762543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:27.893 [2024-12-09 10:17:34.762661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:27.893 [2024-12-09 10:17:34.762725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 01:06:27.893 [2024-12-09 10:17:34.762772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 01:06:27.893 [2024-12-09 10:17:34.762919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:27.893 [2024-12-09 10:17:34.763071] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 01:06:27.893 [2024-12-09 10:17:34.768541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:27.893 [2024-12-09 10:17:34.768720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:06:27.893 [2024-12-09 10:17:34.768889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.478 ms 01:06:27.893 [2024-12-09 10:17:34.769025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:27.893 [2024-12-09 10:17:34.769131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:27.893 [2024-12-09 10:17:34.769260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 01:06:27.893 [2024-12-09 10:17:34.769390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 01:06:27.893 [2024-12-09 10:17:34.769518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 01:06:27.893 [2024-12-09 10:17:34.769622] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 01:06:27.893 [2024-12-09 10:17:34.769976] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 01:06:27.893 [2024-12-09 10:17:34.770154] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 01:06:27.893 [2024-12-09 10:17:34.770323] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 01:06:27.893 [2024-12-09 10:17:34.770499] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 01:06:27.893 [2024-12-09 10:17:34.770624] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 01:06:27.893 [2024-12-09 10:17:34.770775] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 01:06:27.893 [2024-12-09 10:17:34.770879] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 01:06:27.893 [2024-12-09 10:17:34.770943] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 01:06:27.893 [2024-12-09 10:17:34.771016] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 01:06:27.893 [2024-12-09 10:17:34.771064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:27.893 [2024-12-09 10:17:34.771101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 01:06:27.893 [2024-12-09 10:17:34.771140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.447 ms 01:06:27.893 [2024-12-09 10:17:34.771176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:27.893 [2024-12-09 10:17:34.771437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:27.893 [2024-12-09 10:17:34.771602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 01:06:27.893 [2024-12-09 10:17:34.771725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 01:06:27.893 [2024-12-09 10:17:34.771750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:27.893 [2024-12-09 10:17:34.771887] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 01:06:27.894 [2024-12-09 10:17:34.771909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 01:06:27.894 [2024-12-09 10:17:34.771925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:06:27.894 [2024-12-09 10:17:34.771937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:06:27.894 [2024-12-09 10:17:34.771951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 01:06:27.894 [2024-12-09 10:17:34.771962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 01:06:27.894 [2024-12-09 10:17:34.771976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 01:06:27.894 [2024-12-09 10:17:34.771987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 01:06:27.894 [2024-12-09 10:17:34.772000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 01:06:27.894 [2024-12-09 10:17:34.772011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:06:27.894 [2024-12-09 10:17:34.772026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 01:06:27.894 [2024-12-09 10:17:34.772038] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 01:06:27.894 [2024-12-09 10:17:34.772051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:06:27.894 [2024-12-09 10:17:34.772062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 01:06:27.894 [2024-12-09 10:17:34.772075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 01:06:27.894 [2024-12-09 10:17:34.772085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:06:27.894 [2024-12-09 10:17:34.772100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 01:06:27.894 [2024-12-09 10:17:34.772112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 01:06:27.894 [2024-12-09 10:17:34.772124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:06:27.894 [2024-12-09 10:17:34.772135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 01:06:27.894 [2024-12-09 10:17:34.772155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 01:06:27.894 [2024-12-09 10:17:34.772165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:06:27.894 [2024-12-09 10:17:34.772178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 01:06:27.894 [2024-12-09 10:17:34.772189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 01:06:27.894 [2024-12-09 10:17:34.772202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:06:27.894 [2024-12-09 10:17:34.772213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 01:06:27.894 [2024-12-09 10:17:34.772225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 01:06:27.894 [2024-12-09 10:17:34.772236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:06:27.894 [2024-12-09 10:17:34.772265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 01:06:27.894 [2024-12-09 10:17:34.772281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 01:06:27.894 [2024-12-09 10:17:34.772295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:06:27.894 [2024-12-09 10:17:34.772305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 01:06:27.894 [2024-12-09 10:17:34.772321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 01:06:27.894 [2024-12-09 10:17:34.772332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:06:27.894 [2024-12-09 10:17:34.772345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 01:06:27.894 [2024-12-09 10:17:34.772357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 01:06:27.894 [2024-12-09 10:17:34.772372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:06:27.894 [2024-12-09 10:17:34.772383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 01:06:27.894 [2024-12-09 10:17:34.772396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 01:06:27.894 [2024-12-09 10:17:34.772407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:06:27.894 [2024-12-09 10:17:34.772420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 01:06:27.894 [2024-12-09 10:17:34.772433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 01:06:27.894 [2024-12-09 10:17:34.772447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:06:27.894 [2024-12-09 10:17:34.772458] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 01:06:27.894 [2024-12-09 10:17:34.772472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 01:06:27.894 [2024-12-09 10:17:34.772484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:06:27.894 [2024-12-09 10:17:34.772498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:06:27.894 [2024-12-09 10:17:34.772510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 01:06:27.894 [2024-12-09 10:17:34.772525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 01:06:27.894 [2024-12-09 10:17:34.772537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 01:06:27.894 [2024-12-09 10:17:34.772550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 01:06:27.894 [2024-12-09 10:17:34.772561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 01:06:27.894 [2024-12-09 10:17:34.772574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 01:06:27.894 [2024-12-09 10:17:34.772588] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 01:06:27.894 [2024-12-09 10:17:34.772609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:06:27.894 [2024-12-09 10:17:34.772622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 01:06:27.894 [2024-12-09 10:17:34.772636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 01:06:27.894 [2024-12-09 10:17:34.772648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 01:06:27.894 [2024-12-09 10:17:34.772662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 01:06:27.894 [2024-12-09 10:17:34.772674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 01:06:27.894 [2024-12-09 10:17:34.772688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 01:06:27.894 [2024-12-09 10:17:34.772700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 01:06:27.894 [2024-12-09 10:17:34.772716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 01:06:27.894 [2024-12-09 10:17:34.772728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 01:06:27.894 [2024-12-09 10:17:34.772744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 01:06:27.894 [2024-12-09 10:17:34.772756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 01:06:27.894 [2024-12-09 10:17:34.772771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 01:06:27.894 [2024-12-09 10:17:34.772783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 01:06:27.894 [2024-12-09 10:17:34.772797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 01:06:27.894 [2024-12-09 10:17:34.772809] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 01:06:27.894 [2024-12-09 10:17:34.772824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:06:27.894 [2024-12-09 10:17:34.772837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:06:27.894 [2024-12-09 10:17:34.772852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 01:06:27.894 [2024-12-09 10:17:34.772864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 01:06:27.894 [2024-12-09 10:17:34.772878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 01:06:27.894 [2024-12-09 10:17:34.772892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:27.894 [2024-12-09 10:17:34.772907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 01:06:27.894 [2024-12-09 10:17:34.772919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.078 ms 01:06:27.894 [2024-12-09 10:17:34.772933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:27.894 [2024-12-09 10:17:34.773014] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
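For readability, the bdev stack that the preceding trace assembles piece by piece can be summarized as the sketch below. It reuses the exact rpc.py calls and sizes from this run (a 20480 MiB thin base volume and a 5120 MiB cache split, matching the FTL_BASE_SIZE and FTL_CACHE_SIZE exported earlier); the UUIDs are captured from each step's output rather than hard-coded:

  #!/usr/bin/env bash
  # Rebuild the FTL bdev stack as traced: base NVMe -> lvstore -> thin lvol,
  # cache NVMe -> split, then bdev_ftl_create on top of both.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # exposes basen1
  lvs=$($RPC bdev_lvol_create_lvstore basen1 lvs)                    # prints the lvstore UUID
  lvol=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs")          # 20 GiB thin volume
  $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # exposes cachen1
  $RPC bdev_split_create cachen1 -s 5120 1                           # carves cachen1p0
  $RPC -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2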
01:06:27.894 [2024-12-09 10:17:34.773041] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 01:06:30.438 [2024-12-09 10:17:37.297558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.438 [2024-12-09 10:17:37.297639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 01:06:30.438 [2024-12-09 10:17:37.297663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2524.552 ms 01:06:30.438 [2024-12-09 10:17:37.297679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.438 [2024-12-09 10:17:37.339159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.438 [2024-12-09 10:17:37.339302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:06:30.438 [2024-12-09 10:17:37.339328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.193 ms 01:06:30.438 [2024-12-09 10:17:37.339344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.438 [2024-12-09 10:17:37.339484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.438 [2024-12-09 10:17:37.339510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 01:06:30.438 [2024-12-09 10:17:37.339525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 01:06:30.438 [2024-12-09 10:17:37.339552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.438 [2024-12-09 10:17:37.388725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.438 [2024-12-09 10:17:37.388797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:06:30.438 [2024-12-09 10:17:37.388819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.080 ms 01:06:30.438 [2024-12-09 10:17:37.388835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.439 [2024-12-09 10:17:37.388897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.439 [2024-12-09 10:17:37.388930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:06:30.439 [2024-12-09 10:17:37.388944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 01:06:30.439 [2024-12-09 10:17:37.388959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.439 [2024-12-09 10:17:37.389697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.439 [2024-12-09 10:17:37.389734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:06:30.439 [2024-12-09 10:17:37.389758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.648 ms 01:06:30.439 [2024-12-09 10:17:37.389773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.439 [2024-12-09 10:17:37.389830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.439 [2024-12-09 10:17:37.389852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 01:06:30.439 [2024-12-09 10:17:37.389865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 01:06:30.439 [2024-12-09 10:17:37.389882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.439 [2024-12-09 10:17:37.412326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.439 [2024-12-09 10:17:37.412396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 01:06:30.439 [2024-12-09 10:17:37.412418] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.416 ms 01:06:30.439 [2024-12-09 10:17:37.412434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.439 [2024-12-09 10:17:37.437015] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 01:06:30.439 [2024-12-09 10:17:37.438509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.439 [2024-12-09 10:17:37.438688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 01:06:30.439 [2024-12-09 10:17:37.438728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.927 ms 01:06:30.439 [2024-12-09 10:17:37.438743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.439 [2024-12-09 10:17:37.468787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.439 [2024-12-09 10:17:37.469053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 01:06:30.439 [2024-12-09 10:17:37.469094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.975 ms 01:06:30.439 [2024-12-09 10:17:37.469109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.439 [2024-12-09 10:17:37.469222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.439 [2024-12-09 10:17:37.469241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 01:06:30.439 [2024-12-09 10:17:37.469287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 01:06:30.439 [2024-12-09 10:17:37.469301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.697 [2024-12-09 10:17:37.501374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.697 [2024-12-09 10:17:37.501428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 01:06:30.697 [2024-12-09 10:17:37.501454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.987 ms 01:06:30.697 [2024-12-09 10:17:37.501471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.697 [2024-12-09 10:17:37.534363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.697 [2024-12-09 10:17:37.534582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 01:06:30.697 [2024-12-09 10:17:37.534619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.849 ms 01:06:30.697 [2024-12-09 10:17:37.534634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.697 [2024-12-09 10:17:37.535492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.697 [2024-12-09 10:17:37.535528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 01:06:30.697 [2024-12-09 10:17:37.535567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.818 ms 01:06:30.697 [2024-12-09 10:17:37.535579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.697 [2024-12-09 10:17:37.627762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.697 [2024-12-09 10:17:37.627833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 01:06:30.697 [2024-12-09 10:17:37.627863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 92.103 ms 01:06:30.697 [2024-12-09 10:17:37.627877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.697 [2024-12-09 10:17:37.663592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
01:06:30.697 [2024-12-09 10:17:37.663665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 01:06:30.697 [2024-12-09 10:17:37.663722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.574 ms 01:06:30.697 [2024-12-09 10:17:37.663735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.697 [2024-12-09 10:17:37.698424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.697 [2024-12-09 10:17:37.698500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 01:06:30.697 [2024-12-09 10:17:37.698526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.625 ms 01:06:30.697 [2024-12-09 10:17:37.698539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.697 [2024-12-09 10:17:37.733773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.697 [2024-12-09 10:17:37.733850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 01:06:30.697 [2024-12-09 10:17:37.733876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.153 ms 01:06:30.697 [2024-12-09 10:17:37.733888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.697 [2024-12-09 10:17:37.733975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.697 [2024-12-09 10:17:37.733995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 01:06:30.697 [2024-12-09 10:17:37.734016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 01:06:30.697 [2024-12-09 10:17:37.734048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.697 [2024-12-09 10:17:37.734202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:06:30.697 [2024-12-09 10:17:37.734226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 01:06:30.697 [2024-12-09 10:17:37.734243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 01:06:30.697 [2024-12-09 10:17:37.734300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:06:30.697 [2024-12-09 10:17:37.735725] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2987.782 ms, result 0 01:06:30.697 { 01:06:30.697 "name": "ftl", 01:06:30.697 "uuid": "010e698c-d7c3-41c5-90c6-8788d7ba5096" 01:06:30.697 } 01:06:30.956 10:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 01:06:31.214 [2024-12-09 10:17:38.078694] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:31.214 10:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 01:06:31.472 10:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 01:06:31.730 [2024-12-09 10:17:38.687585] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 01:06:31.730 10:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 01:06:31.989 [2024-12-09 10:17:38.950348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:06:31.989 10:17:38 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:06:32.555 Fill FTL, iteration 1 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84290 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84290 /var/tmp/spdk.tgt.sock 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84290 ']' 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 01:06:32.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:32.555 10:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 01:06:32.555 [2024-12-09 10:17:39.523776] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
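The ftl/upgrade_shutdown.sh@28-@40 xtrace lines above define the workload: two passes, each writing 1 GiB (bs=1048576, count=1024) of /dev/urandom into the exported ftln1 namespace at queue depth 2 and then checksumming the same range back. Reassembled from the traced assignments, the loop is roughly the following sketch (values verbatim from the trace; tcp_dd is the ftl/common.sh helper traced below, and $testdir stands in for /home/vagrant/spdk_repo/spdk/test/ftl):

size=1073741824             # bytes per pass (1 GiB)
bs=1048576; count=1024      # 1024 x 1 MiB blocks per pass
iterations=2; qd=2
seek=0; skip=0              # write/read offsets into ftln1, in bs-sized blocks
sums=()                     # md5 of each pass, recorded at @48
for ((i = 0; i < iterations; i++)); do
    echo "Fill FTL, iteration $((i + 1))"
    # @40: push one 1 GiB stripe of random data through NVMe/TCP into the FTL bdev
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$((seek + count))
    # @43-@48 (traced further below): read the stripe back and record its md5
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of=$testdir/file --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$((skip + count))
    sums[i]=$(md5sum $testdir/file | cut -f1 -d' ')
done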
01:06:32.556 [2024-12-09 10:17:39.524150] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84290 ] 01:06:32.813 [2024-12-09 10:17:39.703196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:33.072 [2024-12-09 10:17:39.859333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:34.007 10:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:34.007 10:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 01:06:34.007 10:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 01:06:34.266 ftln1 01:06:34.266 10:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 01:06:34.266 10:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 01:06:34.524 10:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 01:06:34.524 10:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84290 01:06:34.524 10:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84290 ']' 01:06:34.524 10:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84290 01:06:34.524 10:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 01:06:34.524 10:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:34.524 10:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84290 01:06:34.524 killing process with pid 84290 01:06:34.524 10:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:06:34.524 10:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:06:34.524 10:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84290' 01:06:34.525 10:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84290 01:06:34.525 10:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84290 01:06:37.056 10:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 01:06:37.056 10:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 01:06:37.056 [2024-12-09 10:17:43.794987] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
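The trace above is the tcp_dd mechanism in full: a throwaway initiator spdk_tgt (pid 84290 here) is started on core 1 with its own RPC socket, the NVMe/TCP namespace is attached as bdev ftln1, the resulting bdev configuration is dumped to ini.json, the helper target is killed, and spdk_dd then replays that JSON to reach the namespace on its own. Condensed from the traced ftl/common.sh lines (sockets, flags and paths verbatim; the redirect into ini.json is inferred from the @153 existence check rather than traced directly):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
# @162-@165: throwaway initiator target pinned to core 1, on its own RPC socket
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock &
spdk_ini_pid=$!
waitforlisten $spdk_ini_pid /var/tmp/spdk.tgt.sock
# @167: attach the NVMe/TCP namespace; the new bdev appears as ftln1
$rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2018-09.io.spdk:cnode0
# @171-@173: wrap the bdev subsystem dump into a standalone JSON config
{ echo '{"subsystems": ['; $rpc save_subsystem_config -n bdev; echo ']}'; } \
    > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
# @176: the helper target has served its purpose
killprocess $spdk_ini_pid
# @199: spdk_dd loads ini.json itself and talks to the namespace directly
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0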
01:06:37.056 [2024-12-09 10:17:43.795189] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84358 ] 01:06:37.056 [2024-12-09 10:17:43.986268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:37.314 [2024-12-09 10:17:44.141741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:38.687  [2024-12-09T10:17:46.663Z] Copying: 207/1024 [MB] (207 MBps) [2024-12-09T10:17:48.037Z] Copying: 412/1024 [MB] (205 MBps) [2024-12-09T10:17:48.971Z] Copying: 626/1024 [MB] (214 MBps) [2024-12-09T10:17:49.536Z] Copying: 838/1024 [MB] (212 MBps) [2024-12-09T10:17:50.910Z] Copying: 1024/1024 [MB] (average 210 MBps) 01:06:43.866 01:06:43.866 Calculate MD5 checksum, iteration 1 01:06:43.866 10:17:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 01:06:43.866 10:17:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 01:06:43.866 10:17:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 01:06:43.866 10:17:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:06:43.866 10:17:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:06:43.866 10:17:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:06:43.866 10:17:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:06:43.866 10:17:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 01:06:43.866 [2024-12-09 10:17:50.670804] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
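Note the symmetry of the two tcp_dd directions above: --ob/--seek writes host data into the bdev, --ib/--skip reads the same block range back out, so each pass checksums exactly the stripe it just wrote (writes land at ~210 MBps, the readback at ~463 MBps). The checksum recorded at @48 reduces to a single pipeline (path verbatim from the trace):

# iteration 1 yields ce5356749ae4b8426faebec67a2e7292, recorded below
sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')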
01:06:43.866 [2024-12-09 10:17:50.671208] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84431 ] 01:06:43.866 [2024-12-09 10:17:50.845430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:44.124 [2024-12-09 10:17:50.976927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:45.498  [2024-12-09T10:17:53.476Z] Copying: 487/1024 [MB] (487 MBps) [2024-12-09T10:17:53.734Z] Copying: 948/1024 [MB] (461 MBps) [2024-12-09T10:17:54.760Z] Copying: 1024/1024 [MB] (average 463 MBps) 01:06:47.716 01:06:47.716 10:17:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 01:06:47.716 10:17:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:06:50.244 10:17:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 01:06:50.244 Fill FTL, iteration 2 01:06:50.244 10:17:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ce5356749ae4b8426faebec67a2e7292 01:06:50.244 10:17:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 01:06:50.244 10:17:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 01:06:50.244 10:17:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 01:06:50.244 10:17:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 01:06:50.244 10:17:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:06:50.244 10:17:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:06:50.244 10:17:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:06:50.244 10:17:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:06:50.244 10:17:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 01:06:50.244 [2024-12-09 10:17:56.862096] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
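Iteration 2 advances both offsets by count blocks, so the second pass covers the second gigabyte of the namespace and the two passes together dirty 2 GiB of user data. A quick offset check, assuming the FTL's 4 KiB block size (an inference, but consistent with the "user writes: 524288" figure in the shutdown statistics further below):

# bs = 1 MiB; count = 1024 blocks per pass
# i=0: seek=0    -> bytes          0 .. 1073741823
# i=1: seek=1024 -> bytes 1073741824 .. 2147483647
# 2 GiB / 4 KiB per LBA = 524288 user LBAs written in total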
01:06:50.245 [2024-12-09 10:17:56.862530] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84501 ] 01:06:50.245 [2024-12-09 10:17:57.055495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:50.245 [2024-12-09 10:17:57.219993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:52.147  [2024-12-09T10:17:59.758Z] Copying: 199/1024 [MB] (199 MBps) [2024-12-09T10:18:01.131Z] Copying: 403/1024 [MB] (204 MBps) [2024-12-09T10:18:02.065Z] Copying: 611/1024 [MB] (208 MBps) [2024-12-09T10:18:03.012Z] Copying: 800/1024 [MB] (189 MBps) [2024-12-09T10:18:03.012Z] Copying: 998/1024 [MB] (198 MBps) [2024-12-09T10:18:03.944Z] Copying: 1024/1024 [MB] (average 199 MBps) 01:06:56.901 01:06:56.901 10:18:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 01:06:56.901 Calculate MD5 checksum, iteration 2 01:06:56.901 10:18:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 01:06:56.901 10:18:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:06:56.901 10:18:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:06:56.901 10:18:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:06:56.901 10:18:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:06:56.901 10:18:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:06:56.901 10:18:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:06:57.159 [2024-12-09 10:18:04.022553] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
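With both stripes written and checksummed, the test switches from data-path to management-path RPCs: it flips verbose_mode on (which exposes the band and chunk tables seen in the JSON dumps below), arms prep_upgrade_on_shutdown, and counts the NV-cache chunks that actually hold data. Condensed from the traced upgrade_shutdown.sh@52-@70 lines (jq filter verbatim from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true               # @52
$rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true   # @56
used=$($rpc bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
# @64 then checks [[ $used -eq 0 ]]; it is false here (used=3: chunks 1 and 2
# CLOSED at utilization 1.0, chunk 3 OPEN), so the test proceeds to shutdown.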
01:06:57.159 [2024-12-09 10:18:04.023034] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84572 ] 01:06:57.418 [2024-12-09 10:18:04.209344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:57.418 [2024-12-09 10:18:04.338638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:59.359  [2024-12-09T10:18:07.339Z] Copying: 489/1024 [MB] (489 MBps) [2024-12-09T10:18:07.339Z] Copying: 951/1024 [MB] (462 MBps) [2024-12-09T10:18:08.715Z] Copying: 1024/1024 [MB] (average 476 MBps) 01:07:01.671 01:07:01.671 10:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 01:07:01.671 10:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:07:04.205 10:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 01:07:04.205 10:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0c310ab9906746d428bce2ebc0c36a86 01:07:04.205 10:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 01:07:04.205 10:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 01:07:04.205 10:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 01:07:04.205 [2024-12-09 10:18:10.985926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:04.205 [2024-12-09 10:18:10.986009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 01:07:04.205 [2024-12-09 10:18:10.986088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 01:07:04.205 [2024-12-09 10:18:10.986102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:04.205 [2024-12-09 10:18:10.986151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:04.205 [2024-12-09 10:18:10.986194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 01:07:04.206 [2024-12-09 10:18:10.986207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 01:07:04.206 [2024-12-09 10:18:10.986219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:04.206 [2024-12-09 10:18:10.986272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:04.206 [2024-12-09 10:18:10.986290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 01:07:04.206 [2024-12-09 10:18:10.986303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 01:07:04.206 [2024-12-09 10:18:10.986314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:04.206 [2024-12-09 10:18:10.986405] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.495 ms, result 0 01:07:04.206 true 01:07:04.206 10:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 01:07:04.465 { 01:07:04.465 "name": "ftl", 01:07:04.465 "properties": [ 01:07:04.465 { 01:07:04.465 "name": "superblock_version", 01:07:04.465 "value": 5, 01:07:04.465 "read-only": true 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "name": "base_device", 01:07:04.465 "bands": [ 01:07:04.465 { 01:07:04.465 "id": 
0, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 1, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 2, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 3, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 4, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 5, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 6, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 7, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 8, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 9, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 10, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 11, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 12, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 13, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 14, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 15, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 16, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 17, 01:07:04.465 "state": "FREE", 01:07:04.465 "validity": 0.0 01:07:04.465 } 01:07:04.465 ], 01:07:04.465 "read-only": true 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "name": "cache_device", 01:07:04.465 "type": "bdev", 01:07:04.465 "chunks": [ 01:07:04.465 { 01:07:04.465 "id": 0, 01:07:04.465 "state": "INACTIVE", 01:07:04.465 "utilization": 0.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 1, 01:07:04.465 "state": "CLOSED", 01:07:04.465 "utilization": 1.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 2, 01:07:04.465 "state": "CLOSED", 01:07:04.465 "utilization": 1.0 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 3, 01:07:04.465 "state": "OPEN", 01:07:04.465 "utilization": 0.001953125 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "id": 4, 01:07:04.465 "state": "OPEN", 01:07:04.465 "utilization": 0.0 01:07:04.465 } 01:07:04.465 ], 01:07:04.465 "read-only": true 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "name": "verbose_mode", 01:07:04.465 "value": true, 01:07:04.465 "unit": "", 01:07:04.465 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 01:07:04.465 }, 01:07:04.465 { 01:07:04.465 "name": "prep_upgrade_on_shutdown", 01:07:04.465 "value": false, 01:07:04.465 "unit": "", 01:07:04.465 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 01:07:04.465 } 01:07:04.465 ] 01:07:04.465 } 01:07:04.465 10:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 01:07:04.724 [2024-12-09 10:18:11.606748] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:04.724 [2024-12-09 10:18:11.607067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 01:07:04.724 [2024-12-09 10:18:11.607193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 01:07:04.724 [2024-12-09 10:18:11.607244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:04.724 [2024-12-09 10:18:11.607419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:04.724 [2024-12-09 10:18:11.607475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 01:07:04.724 [2024-12-09 10:18:11.607513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 01:07:04.724 [2024-12-09 10:18:11.607640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:04.724 [2024-12-09 10:18:11.607769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:04.724 [2024-12-09 10:18:11.607821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 01:07:04.724 [2024-12-09 10:18:11.607869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 01:07:04.724 [2024-12-09 10:18:11.607963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:04.724 [2024-12-09 10:18:11.608092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.332 ms, result 0 01:07:04.724 true 01:07:04.724 10:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 01:07:04.725 10:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 01:07:04.725 10:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 01:07:04.983 10:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 01:07:04.983 10:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 01:07:04.983 10:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 01:07:05.242 [2024-12-09 10:18:12.199453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:05.242 [2024-12-09 10:18:12.199522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 01:07:05.242 [2024-12-09 10:18:12.199544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 01:07:05.242 [2024-12-09 10:18:12.199564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:05.242 [2024-12-09 10:18:12.199602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:05.242 [2024-12-09 10:18:12.199617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 01:07:05.242 [2024-12-09 10:18:12.199630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 01:07:05.242 [2024-12-09 10:18:12.199641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:05.242 [2024-12-09 10:18:12.199668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:05.242 [2024-12-09 10:18:12.199682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 01:07:05.242 [2024-12-09 10:18:12.199694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 01:07:05.242 [2024-12-09 
10:18:12.199705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:05.242 [2024-12-09 10:18:12.199782] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.321 ms, result 0 01:07:05.242 true 01:07:05.242 10:18:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 01:07:05.500 { 01:07:05.500 "name": "ftl", 01:07:05.500 "properties": [ 01:07:05.500 { 01:07:05.500 "name": "superblock_version", 01:07:05.500 "value": 5, 01:07:05.500 "read-only": true 01:07:05.500 }, 01:07:05.500 { 01:07:05.500 "name": "base_device", 01:07:05.500 "bands": [ 01:07:05.500 { 01:07:05.500 "id": 0, 01:07:05.500 "state": "FREE", 01:07:05.500 "validity": 0.0 01:07:05.500 }, 01:07:05.500 { 01:07:05.500 "id": 1, 01:07:05.500 "state": "FREE", 01:07:05.500 "validity": 0.0 01:07:05.500 }, 01:07:05.500 { 01:07:05.500 "id": 2, 01:07:05.500 "state": "FREE", 01:07:05.500 "validity": 0.0 01:07:05.500 }, 01:07:05.500 { 01:07:05.500 "id": 3, 01:07:05.500 "state": "FREE", 01:07:05.500 "validity": 0.0 01:07:05.500 }, 01:07:05.500 { 01:07:05.500 "id": 4, 01:07:05.500 "state": "FREE", 01:07:05.500 "validity": 0.0 01:07:05.500 }, 01:07:05.500 { 01:07:05.500 "id": 5, 01:07:05.500 "state": "FREE", 01:07:05.500 "validity": 0.0 01:07:05.500 }, 01:07:05.500 { 01:07:05.500 "id": 6, 01:07:05.500 "state": "FREE", 01:07:05.500 "validity": 0.0 01:07:05.500 }, 01:07:05.500 { 01:07:05.500 "id": 7, 01:07:05.500 "state": "FREE", 01:07:05.500 "validity": 0.0 01:07:05.500 }, 01:07:05.500 { 01:07:05.500 "id": 8, 01:07:05.500 "state": "FREE", 01:07:05.500 "validity": 0.0 01:07:05.500 }, 01:07:05.500 { 01:07:05.500 "id": 9, 01:07:05.500 "state": "FREE", 01:07:05.500 "validity": 0.0 01:07:05.500 }, 01:07:05.500 { 01:07:05.500 "id": 10, 01:07:05.500 "state": "FREE", 01:07:05.500 "validity": 0.0 01:07:05.500 }, 01:07:05.500 { 01:07:05.500 "id": 11, 01:07:05.500 "state": "FREE", 01:07:05.500 "validity": 0.0 01:07:05.500 }, 01:07:05.501 { 01:07:05.501 "id": 12, 01:07:05.501 "state": "FREE", 01:07:05.501 "validity": 0.0 01:07:05.501 }, 01:07:05.501 { 01:07:05.501 "id": 13, 01:07:05.501 "state": "FREE", 01:07:05.501 "validity": 0.0 01:07:05.501 }, 01:07:05.501 { 01:07:05.501 "id": 14, 01:07:05.501 "state": "FREE", 01:07:05.501 "validity": 0.0 01:07:05.501 }, 01:07:05.501 { 01:07:05.501 "id": 15, 01:07:05.501 "state": "FREE", 01:07:05.501 "validity": 0.0 01:07:05.501 }, 01:07:05.501 { 01:07:05.501 "id": 16, 01:07:05.501 "state": "FREE", 01:07:05.501 "validity": 0.0 01:07:05.501 }, 01:07:05.501 { 01:07:05.501 "id": 17, 01:07:05.501 "state": "FREE", 01:07:05.501 "validity": 0.0 01:07:05.501 } 01:07:05.501 ], 01:07:05.501 "read-only": true 01:07:05.501 }, 01:07:05.501 { 01:07:05.501 "name": "cache_device", 01:07:05.501 "type": "bdev", 01:07:05.501 "chunks": [ 01:07:05.501 { 01:07:05.501 "id": 0, 01:07:05.501 "state": "INACTIVE", 01:07:05.501 "utilization": 0.0 01:07:05.501 }, 01:07:05.501 { 01:07:05.501 "id": 1, 01:07:05.501 "state": "CLOSED", 01:07:05.501 "utilization": 1.0 01:07:05.501 }, 01:07:05.501 { 01:07:05.501 "id": 2, 01:07:05.501 "state": "CLOSED", 01:07:05.501 "utilization": 1.0 01:07:05.501 }, 01:07:05.501 { 01:07:05.501 "id": 3, 01:07:05.501 "state": "OPEN", 01:07:05.501 "utilization": 0.001953125 01:07:05.501 }, 01:07:05.501 { 01:07:05.501 "id": 4, 01:07:05.501 "state": "OPEN", 01:07:05.501 "utilization": 0.0 01:07:05.501 } 01:07:05.501 ], 01:07:05.501 "read-only": true 01:07:05.501 
}, 01:07:05.501 { 01:07:05.501 "name": "verbose_mode", 01:07:05.501 "value": true, 01:07:05.501 "unit": "", 01:07:05.501 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 01:07:05.501 }, 01:07:05.501 { 01:07:05.501 "name": "prep_upgrade_on_shutdown", 01:07:05.501 "value": true, 01:07:05.501 "unit": "", 01:07:05.501 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 01:07:05.501 } 01:07:05.501 ] 01:07:05.501 } 01:07:05.501 10:18:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 01:07:05.501 10:18:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84166 ]] 01:07:05.501 10:18:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84166 01:07:05.501 10:18:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84166 ']' 01:07:05.501 10:18:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84166 01:07:05.501 10:18:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 01:07:05.501 10:18:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:05.501 10:18:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84166 01:07:05.760 killing process with pid 84166 01:07:05.760 10:18:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:07:05.760 10:18:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:07:05.760 10:18:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84166' 01:07:05.760 10:18:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84166 01:07:05.760 10:18:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84166 01:07:06.702 [2024-12-09 10:18:13.607575] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 01:07:06.702 [2024-12-09 10:18:13.623968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:06.702 [2024-12-09 10:18:13.624028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 01:07:06.702 [2024-12-09 10:18:13.624050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 01:07:06.702 [2024-12-09 10:18:13.624062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:06.702 [2024-12-09 10:18:13.624095] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 01:07:06.702 [2024-12-09 10:18:13.627863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:06.702 [2024-12-09 10:18:13.627899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 01:07:06.702 [2024-12-09 10:18:13.627915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.746 ms 01:07:06.702 [2024-12-09 10:18:13.627933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.729 [2024-12-09 10:18:22.401460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.729 [2024-12-09 10:18:22.401539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 01:07:16.729 [2024-12-09 10:18:22.401567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8773.526 ms 01:07:16.729 [2024-12-09 10:18:22.401579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.729 [2024-12-09 
10:18:22.403092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.729 [2024-12-09 10:18:22.403170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 01:07:16.729 [2024-12-09 10:18:22.403200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.489 ms 01:07:16.729 [2024-12-09 10:18:22.403212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.729 [2024-12-09 10:18:22.404519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.729 [2024-12-09 10:18:22.404542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 01:07:16.729 [2024-12-09 10:18:22.404556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.159 ms 01:07:16.729 [2024-12-09 10:18:22.404574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.729 [2024-12-09 10:18:22.417815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.729 [2024-12-09 10:18:22.417854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 01:07:16.729 [2024-12-09 10:18:22.417886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.197 ms 01:07:16.729 [2024-12-09 10:18:22.417898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.729 [2024-12-09 10:18:22.425992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.729 [2024-12-09 10:18:22.426036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 01:07:16.729 [2024-12-09 10:18:22.426053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.053 ms 01:07:16.729 [2024-12-09 10:18:22.426074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.729 [2024-12-09 10:18:22.426182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.729 [2024-12-09 10:18:22.426210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 01:07:16.729 [2024-12-09 10:18:22.426238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 01:07:16.729 [2024-12-09 10:18:22.426270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.729 [2024-12-09 10:18:22.438976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.729 [2024-12-09 10:18:22.439032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 01:07:16.729 [2024-12-09 10:18:22.439065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.679 ms 01:07:16.729 [2024-12-09 10:18:22.439077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.729 [2024-12-09 10:18:22.451638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.729 [2024-12-09 10:18:22.451676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 01:07:16.729 [2024-12-09 10:18:22.451707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.521 ms 01:07:16.729 [2024-12-09 10:18:22.451717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.729 [2024-12-09 10:18:22.464464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.729 [2024-12-09 10:18:22.464502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 01:07:16.729 [2024-12-09 10:18:22.464516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.708 ms 01:07:16.729 [2024-12-09 10:18:22.464526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 01:07:16.729 [2024-12-09 10:18:22.476960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.729 [2024-12-09 10:18:22.477011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 01:07:16.729 [2024-12-09 10:18:22.477042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.357 ms 01:07:16.729 [2024-12-09 10:18:22.477052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.729 [2024-12-09 10:18:22.477089] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 01:07:16.729 [2024-12-09 10:18:22.477126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 01:07:16.729 [2024-12-09 10:18:22.477140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 01:07:16.729 [2024-12-09 10:18:22.477152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 01:07:16.729 [2024-12-09 10:18:22.477163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:07:16.729 [2024-12-09 10:18:22.477174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:07:16.729 [2024-12-09 10:18:22.477185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:07:16.729 [2024-12-09 10:18:22.477195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:07:16.729 [2024-12-09 10:18:22.477207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:07:16.729 [2024-12-09 10:18:22.477217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:07:16.729 [2024-12-09 10:18:22.477235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:07:16.729 [2024-12-09 10:18:22.477246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:07:16.729 [2024-12-09 10:18:22.477295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:07:16.730 [2024-12-09 10:18:22.477308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:07:16.730 [2024-12-09 10:18:22.477319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:07:16.730 [2024-12-09 10:18:22.477331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:07:16.730 [2024-12-09 10:18:22.477342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:07:16.730 [2024-12-09 10:18:22.477353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:07:16.730 [2024-12-09 10:18:22.477364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:07:16.730 [2024-12-09 10:18:22.477393] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 01:07:16.730 [2024-12-09 10:18:22.477420] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 010e698c-d7c3-41c5-90c6-8788d7ba5096 01:07:16.730 [2024-12-09 10:18:22.477441] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 01:07:16.730 [2024-12-09 
10:18:22.477457] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 01:07:16.730 [2024-12-09 10:18:22.477484] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 01:07:16.730 [2024-12-09 10:18:22.477513] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 01:07:16.730 [2024-12-09 10:18:22.477530] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 01:07:16.730 [2024-12-09 10:18:22.477541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 01:07:16.730 [2024-12-09 10:18:22.477555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 01:07:16.730 [2024-12-09 10:18:22.477565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 01:07:16.730 [2024-12-09 10:18:22.477575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 01:07:16.730 [2024-12-09 10:18:22.477586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.730 [2024-12-09 10:18:22.477597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 01:07:16.730 [2024-12-09 10:18:22.477609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.498 ms 01:07:16.730 [2024-12-09 10:18:22.477620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.495301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.730 [2024-12-09 10:18:22.495557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 01:07:16.730 [2024-12-09 10:18:22.495601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.656 ms 01:07:16.730 [2024-12-09 10:18:22.495615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.496191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:16.730 [2024-12-09 10:18:22.496215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 01:07:16.730 [2024-12-09 10:18:22.496237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 01:07:16.730 [2024-12-09 10:18:22.496260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.556445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:07:16.730 [2024-12-09 10:18:22.556527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 01:07:16.730 [2024-12-09 10:18:22.556562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:07:16.730 [2024-12-09 10:18:22.556583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.556681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:07:16.730 [2024-12-09 10:18:22.556697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 01:07:16.730 [2024-12-09 10:18:22.556724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:07:16.730 [2024-12-09 10:18:22.556736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.556866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:07:16.730 [2024-12-09 10:18:22.556887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:07:16.730 [2024-12-09 10:18:22.556906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:07:16.730 [2024-12-09 10:18:22.556918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 01:07:16.730 [2024-12-09 10:18:22.556943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:07:16.730 [2024-12-09 10:18:22.556957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:07:16.730 [2024-12-09 10:18:22.556969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:07:16.730 [2024-12-09 10:18:22.556980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.675853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:07:16.730 [2024-12-09 10:18:22.675987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:07:16.730 [2024-12-09 10:18:22.676030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:07:16.730 [2024-12-09 10:18:22.676043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.771605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:07:16.730 [2024-12-09 10:18:22.771675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:07:16.730 [2024-12-09 10:18:22.771693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:07:16.730 [2024-12-09 10:18:22.771705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.771856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:07:16.730 [2024-12-09 10:18:22.771874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:07:16.730 [2024-12-09 10:18:22.771886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:07:16.730 [2024-12-09 10:18:22.771897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.771978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:07:16.730 [2024-12-09 10:18:22.771994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:07:16.730 [2024-12-09 10:18:22.772006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:07:16.730 [2024-12-09 10:18:22.772017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.772186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:07:16.730 [2024-12-09 10:18:22.772205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:07:16.730 [2024-12-09 10:18:22.772217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:07:16.730 [2024-12-09 10:18:22.772229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.772316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:07:16.730 [2024-12-09 10:18:22.772358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 01:07:16.730 [2024-12-09 10:18:22.772399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:07:16.730 [2024-12-09 10:18:22.772414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.772464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:07:16.730 [2024-12-09 10:18:22.772481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:07:16.730 [2024-12-09 10:18:22.772504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:07:16.730 [2024-12-09 10:18:22.772516] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.772578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:07:16.730 [2024-12-09 10:18:22.772597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:07:16.730 [2024-12-09 10:18:22.772610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:07:16.730 [2024-12-09 10:18:22.772621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:16.730 [2024-12-09 10:18:22.772776] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9148.815 ms, result 0 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84785 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84785 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84785 ']' 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:20.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:20.018 10:18:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 01:07:20.018 [2024-12-09 10:18:26.510880] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
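This relaunch is the actual subject of the test: the previous instance was shut down dirty with prep_upgrade_on_shutdown armed (note the 8773.526 ms "Stop core poller" step dominating the 9148.815 ms FTL shutdown above), and the new target (pid 84785) rebuilds the whole stack from a JSON config file rather than from live RPCs. Condensed from the traced ftl/common.sh@81-@91 lines; the backgrounding and pid capture shown here are a sketch, and tgt.json is presumably the file written by the earlier save_config at @126:

# @84-@85: relaunch the target from the saved config, which recreates the
# transport, subsystem, namespace and FTL bdev in one shot
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!
waitforlisten $spdk_tgt_pid   # @91: blocks until /var/tmp/spdk.sock answers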
01:07:20.018 [2024-12-09 10:18:26.511336] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84785 ] 01:07:20.018 [2024-12-09 10:18:26.683924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:20.018 [2024-12-09 10:18:26.811754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:07:20.954 [2024-12-09 10:18:27.781287] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 01:07:20.954 [2024-12-09 10:18:27.781383] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 01:07:20.954 [2024-12-09 10:18:27.931084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.954 [2024-12-09 10:18:27.931385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 01:07:20.954 [2024-12-09 10:18:27.931418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 01:07:20.954 [2024-12-09 10:18:27.931432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.954 [2024-12-09 10:18:27.931532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.954 [2024-12-09 10:18:27.931552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:07:20.954 [2024-12-09 10:18:27.931566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 01:07:20.954 [2024-12-09 10:18:27.931578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.954 [2024-12-09 10:18:27.931623] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 01:07:20.954 [2024-12-09 10:18:27.932571] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 01:07:20.954 [2024-12-09 10:18:27.932615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.954 [2024-12-09 10:18:27.932630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:07:20.954 [2024-12-09 10:18:27.932643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.008 ms 01:07:20.954 [2024-12-09 10:18:27.932654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.954 [2024-12-09 10:18:27.934561] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 01:07:20.954 [2024-12-09 10:18:27.953088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.954 [2024-12-09 10:18:27.953160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 01:07:20.954 [2024-12-09 10:18:27.953197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.527 ms 01:07:20.954 [2024-12-09 10:18:27.953219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.954 [2024-12-09 10:18:27.953330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.954 [2024-12-09 10:18:27.953353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 01:07:20.954 [2024-12-09 10:18:27.953367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 01:07:20.954 [2024-12-09 10:18:27.953379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.954 [2024-12-09 10:18:27.962710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.954 [2024-12-09 
10:18:27.962973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:07:20.954 [2024-12-09 10:18:27.963004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.209 ms 01:07:20.954 [2024-12-09 10:18:27.963017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.954 [2024-12-09 10:18:27.963126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.954 [2024-12-09 10:18:27.963146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:07:20.954 [2024-12-09 10:18:27.963160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 01:07:20.954 [2024-12-09 10:18:27.963171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.954 [2024-12-09 10:18:27.963271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.954 [2024-12-09 10:18:27.963298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 01:07:20.954 [2024-12-09 10:18:27.963311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 01:07:20.954 [2024-12-09 10:18:27.963323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.954 [2024-12-09 10:18:27.963367] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 01:07:20.954 [2024-12-09 10:18:27.968677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.954 [2024-12-09 10:18:27.968716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:07:20.954 [2024-12-09 10:18:27.968732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.320 ms 01:07:20.954 [2024-12-09 10:18:27.968749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.954 [2024-12-09 10:18:27.968825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.954 [2024-12-09 10:18:27.968843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 01:07:20.954 [2024-12-09 10:18:27.968856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 01:07:20.954 [2024-12-09 10:18:27.968868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.954 [2024-12-09 10:18:27.968923] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 01:07:20.954 [2024-12-09 10:18:27.968963] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 01:07:20.954 [2024-12-09 10:18:27.969007] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 01:07:20.954 [2024-12-09 10:18:27.969028] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 01:07:20.954 [2024-12-09 10:18:27.969148] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 01:07:20.954 [2024-12-09 10:18:27.969164] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 01:07:20.954 [2024-12-09 10:18:27.969179] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 01:07:20.954 [2024-12-09 10:18:27.969195] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 01:07:20.954 [2024-12-09 10:18:27.969208] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 01:07:20.954 [2024-12-09 10:18:27.969225] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 01:07:20.954 [2024-12-09 10:18:27.969237] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 01:07:20.954 [2024-12-09 10:18:27.969269] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 01:07:20.954 [2024-12-09 10:18:27.969285] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 01:07:20.955 [2024-12-09 10:18:27.969298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.955 [2024-12-09 10:18:27.969309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 01:07:20.955 [2024-12-09 10:18:27.969321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.379 ms 01:07:20.955 [2024-12-09 10:18:27.969333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.955 [2024-12-09 10:18:27.969435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.955 [2024-12-09 10:18:27.969451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 01:07:20.955 [2024-12-09 10:18:27.969468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 01:07:20.955 [2024-12-09 10:18:27.969479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.955 [2024-12-09 10:18:27.969598] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 01:07:20.955 [2024-12-09 10:18:27.969616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 01:07:20.955 [2024-12-09 10:18:27.969628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:07:20.955 [2024-12-09 10:18:27.969640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:20.955 [2024-12-09 10:18:27.969652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 01:07:20.955 [2024-12-09 10:18:27.969662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 01:07:20.955 [2024-12-09 10:18:27.969673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 01:07:20.955 [2024-12-09 10:18:27.969684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 01:07:20.955 [2024-12-09 10:18:27.969695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 01:07:20.955 [2024-12-09 10:18:27.969706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:20.955 [2024-12-09 10:18:27.969716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 01:07:20.955 [2024-12-09 10:18:27.969727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 01:07:20.955 [2024-12-09 10:18:27.969737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:20.955 [2024-12-09 10:18:27.969748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 01:07:20.955 [2024-12-09 10:18:27.969758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 01:07:20.955 [2024-12-09 10:18:27.969769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:20.955 [2024-12-09 10:18:27.969780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 01:07:20.955 [2024-12-09 10:18:27.969790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 01:07:20.955 [2024-12-09 10:18:27.969800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:20.955 [2024-12-09 10:18:27.969811] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 01:07:20.955 [2024-12-09 10:18:27.969821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 01:07:20.955 [2024-12-09 10:18:27.969832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:07:20.955 [2024-12-09 10:18:27.969842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 01:07:20.955 [2024-12-09 10:18:27.969866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 01:07:20.955 [2024-12-09 10:18:27.969878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:07:20.955 [2024-12-09 10:18:27.969889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 01:07:20.955 [2024-12-09 10:18:27.969900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 01:07:20.955 [2024-12-09 10:18:27.969910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:07:20.955 [2024-12-09 10:18:27.969921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 01:07:20.955 [2024-12-09 10:18:27.969931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 01:07:20.955 [2024-12-09 10:18:27.969942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:07:20.955 [2024-12-09 10:18:27.969953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 01:07:20.955 [2024-12-09 10:18:27.969963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 01:07:20.955 [2024-12-09 10:18:27.969973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:20.955 [2024-12-09 10:18:27.969984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 01:07:20.955 [2024-12-09 10:18:27.969994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 01:07:20.955 [2024-12-09 10:18:27.970005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:20.955 [2024-12-09 10:18:27.970016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 01:07:20.955 [2024-12-09 10:18:27.970026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 01:07:20.955 [2024-12-09 10:18:27.970036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:20.955 [2024-12-09 10:18:27.970047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 01:07:20.955 [2024-12-09 10:18:27.970058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 01:07:20.955 [2024-12-09 10:18:27.970080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:20.955 [2024-12-09 10:18:27.970092] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 01:07:20.955 [2024-12-09 10:18:27.970104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 01:07:20.955 [2024-12-09 10:18:27.970116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:07:20.955 [2024-12-09 10:18:27.970128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:20.955 [2024-12-09 10:18:27.970145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 01:07:20.955 [2024-12-09 10:18:27.970157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 01:07:20.955 [2024-12-09 10:18:27.970168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 01:07:20.955 [2024-12-09 10:18:27.970178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 01:07:20.955 [2024-12-09 10:18:27.970189] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 01:07:20.955 [2024-12-09 10:18:27.970200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 01:07:20.955 [2024-12-09 10:18:27.970212] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 01:07:20.955 [2024-12-09 10:18:27.970226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:07:20.955 [2024-12-09 10:18:27.970239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 01:07:20.955 [2024-12-09 10:18:27.970569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 01:07:20.955 [2024-12-09 10:18:27.970650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 01:07:20.955 [2024-12-09 10:18:27.970706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 01:07:20.955 [2024-12-09 10:18:27.970842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 01:07:20.955 [2024-12-09 10:18:27.970902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 01:07:20.955 [2024-12-09 10:18:27.971019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 01:07:20.955 [2024-12-09 10:18:27.971084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 01:07:20.955 [2024-12-09 10:18:27.971139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 01:07:20.955 [2024-12-09 10:18:27.971302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 01:07:20.955 [2024-12-09 10:18:27.971359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 01:07:20.955 [2024-12-09 10:18:27.971526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 01:07:20.955 [2024-12-09 10:18:27.971585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 01:07:20.955 [2024-12-09 10:18:27.971689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 01:07:20.955 [2024-12-09 10:18:27.971804] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 01:07:20.955 [2024-12-09 10:18:27.971866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:07:20.955 [2024-12-09 10:18:27.971987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:07:20.955 [2024-12-09 10:18:27.972040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 01:07:20.955 [2024-12-09 10:18:27.972093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 01:07:20.955 [2024-12-09 10:18:27.972232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 01:07:20.955 [2024-12-09 10:18:27.972328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:20.955 [2024-12-09 10:18:27.972425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 01:07:20.955 [2024-12-09 10:18:27.972443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.797 ms 01:07:20.955 [2024-12-09 10:18:27.972456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:20.955 [2024-12-09 10:18:27.972534] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 01:07:20.955 [2024-12-09 10:18:27.972555] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 01:07:23.484 [2024-12-09 10:18:30.330240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.330335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 01:07:23.484 [2024-12-09 10:18:30.330358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2357.715 ms 01:07:23.484 [2024-12-09 10:18:30.330371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.484 [2024-12-09 10:18:30.369686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.369760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:07:23.484 [2024-12-09 10:18:30.369782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.016 ms 01:07:23.484 [2024-12-09 10:18:30.369794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.484 [2024-12-09 10:18:30.369946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.369973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 01:07:23.484 [2024-12-09 10:18:30.370002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 01:07:23.484 [2024-12-09 10:18:30.370014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.484 [2024-12-09 10:18:30.415942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.416221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:07:23.484 [2024-12-09 10:18:30.416281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.823 ms 01:07:23.484 [2024-12-09 10:18:30.416296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.484 [2024-12-09 10:18:30.416385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.416402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:07:23.484 [2024-12-09 10:18:30.416416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 01:07:23.484 [2024-12-09 10:18:30.416428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.484 [2024-12-09 10:18:30.417052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.417071] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:07:23.484 [2024-12-09 10:18:30.417085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.531 ms 01:07:23.484 [2024-12-09 10:18:30.417097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.484 [2024-12-09 10:18:30.417165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.417180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 01:07:23.484 [2024-12-09 10:18:30.417192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 01:07:23.484 [2024-12-09 10:18:30.417204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.484 [2024-12-09 10:18:30.438274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.438574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 01:07:23.484 [2024-12-09 10:18:30.438607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.035 ms 01:07:23.484 [2024-12-09 10:18:30.438621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.484 [2024-12-09 10:18:30.466106] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 01:07:23.484 [2024-12-09 10:18:30.466182] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 01:07:23.484 [2024-12-09 10:18:30.466206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.466219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 01:07:23.484 [2024-12-09 10:18:30.466237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.379 ms 01:07:23.484 [2024-12-09 10:18:30.466267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.484 [2024-12-09 10:18:30.484477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.484745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 01:07:23.484 [2024-12-09 10:18:30.484777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.136 ms 01:07:23.484 [2024-12-09 10:18:30.484791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.484 [2024-12-09 10:18:30.500492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.500571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 01:07:23.484 [2024-12-09 10:18:30.500592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.614 ms 01:07:23.484 [2024-12-09 10:18:30.500604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.484 [2024-12-09 10:18:30.516086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.516147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 01:07:23.484 [2024-12-09 10:18:30.516168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.417 ms 01:07:23.484 [2024-12-09 10:18:30.516179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.484 [2024-12-09 10:18:30.517174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.484 [2024-12-09 10:18:30.517211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 01:07:23.484 [2024-12-09 
10:18:30.517227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.799 ms 01:07:23.484 [2024-12-09 10:18:30.517239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.743 [2024-12-09 10:18:30.596142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.743 [2024-12-09 10:18:30.596225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 01:07:23.743 [2024-12-09 10:18:30.596264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 78.850 ms 01:07:23.743 [2024-12-09 10:18:30.596279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.743 [2024-12-09 10:18:30.610954] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 01:07:23.743 [2024-12-09 10:18:30.612437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.743 [2024-12-09 10:18:30.612473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 01:07:23.743 [2024-12-09 10:18:30.612493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.067 ms 01:07:23.743 [2024-12-09 10:18:30.612506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.743 [2024-12-09 10:18:30.612677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.743 [2024-12-09 10:18:30.612703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 01:07:23.743 [2024-12-09 10:18:30.612717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 01:07:23.743 [2024-12-09 10:18:30.612728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.743 [2024-12-09 10:18:30.612818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.743 [2024-12-09 10:18:30.612838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 01:07:23.743 [2024-12-09 10:18:30.612851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 01:07:23.743 [2024-12-09 10:18:30.612863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.743 [2024-12-09 10:18:30.612900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.743 [2024-12-09 10:18:30.612915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 01:07:23.743 [2024-12-09 10:18:30.612935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 01:07:23.743 [2024-12-09 10:18:30.612961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.743 [2024-12-09 10:18:30.613006] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 01:07:23.743 [2024-12-09 10:18:30.613023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.743 [2024-12-09 10:18:30.613035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 01:07:23.743 [2024-12-09 10:18:30.613047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 01:07:23.743 [2024-12-09 10:18:30.613058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:23.743 [2024-12-09 10:18:30.645211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:23.743 [2024-12-09 10:18:30.645314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 01:07:23.744 [2024-12-09 10:18:30.645336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.114 ms 01:07:23.744 [2024-12-09 10:18:30.645349] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:23.744 [2024-12-09 10:18:30.645467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:23.744 [2024-12-09 10:18:30.645487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
01:07:23.744 [2024-12-09 10:18:30.645502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms
01:07:23.744 [2024-12-09 10:18:30.645514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:23.744 [2024-12-09 10:18:30.646977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2715.385 ms, result 0
01:07:23.744 [2024-12-09 10:18:30.661731] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
01:07:23.744 [2024-12-09 10:18:30.677803] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
01:07:23.744 [2024-12-09 10:18:30.687771] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
01:07:24.310 10:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:07:24.310 10:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
01:07:24.310 10:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
01:07:24.310 10:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
01:07:24.310 10:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
01:07:24.569 [2024-12-09 10:18:31.384482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:24.569 [2024-12-09 10:18:31.384552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
01:07:24.569 [2024-12-09 10:18:31.384578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms
01:07:24.569 [2024-12-09 10:18:31.384590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:24.569 [2024-12-09 10:18:31.384668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:24.569 [2024-12-09 10:18:31.384686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
01:07:24.569 [2024-12-09 10:18:31.384699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
01:07:24.569 [2024-12-09 10:18:31.384711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:24.569 [2024-12-09 10:18:31.384740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:24.569 [2024-12-09 10:18:31.384755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
01:07:24.569 [2024-12-09 10:18:31.384767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
01:07:24.569 [2024-12-09 10:18:31.384778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:24.569 [2024-12-09 10:18:31.384870] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.391 ms, result 0
01:07:24.569 true
01:07:24.569 10:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
01:07:24.828 {
01:07:24.828   "name": "ftl",
01:07:24.828   "properties": [
01:07:24.828     { "name": "superblock_version", "value": 5, "read-only": true },
01:07:24.828     {
01:07:24.828       "name": "base_device",
01:07:24.828       "bands": [
01:07:24.828         { "id": 0,  "state": "CLOSED", "validity": 1.0 },
01:07:24.828         { "id": 1,  "state": "CLOSED", "validity": 1.0 },
01:07:24.828         { "id": 2,  "state": "CLOSED", "validity": 0.007843137254901933 },
01:07:24.828         { "id": 3,  "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 4,  "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 5,  "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 6,  "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 7,  "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 8,  "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 9,  "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 10, "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 11, "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 12, "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 13, "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 14, "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 15, "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 16, "state": "FREE", "validity": 0.0 },
01:07:24.828         { "id": 17, "state": "FREE", "validity": 0.0 }
01:07:24.828       ],
01:07:24.828       "read-only": true
01:07:24.828     },
01:07:24.828     {
01:07:24.828       "name": "cache_device",
01:07:24.828       "type": "bdev",
01:07:24.828       "chunks": [
01:07:24.828         { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
01:07:24.828         { "id": 1, "state": "OPEN", "utilization": 0.0 },
01:07:24.828         { "id": 2, "state": "OPEN", "utilization": 0.0 },
01:07:24.828         { "id": 3, "state": "FREE", "utilization": 0.0 },
01:07:24.828         { "id": 4, "state": "FREE", "utilization": 0.0 }
01:07:24.828       ],
01:07:24.828       "read-only": true
01:07:24.828     },
01:07:24.828     { "name": "verbose_mode", "value": true, "unit": "", "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" },
01:07:24.828     { "name": "prep_upgrade_on_shutdown", "value": false, "unit": "", "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" }
01:07:24.828   ]
01:07:24.828 }
01:07:24.828 10:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
01:07:24.828 10:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
01:07:24.828 10:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
01:07:25.086 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
01:07:25.086 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
01:07:25.086 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
01:07:25.086 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
01:07:25.086 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
01:07:25.345 Validate MD5 checksum, iteration 1
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
01:07:25.345 10:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
01:07:25.603 [2024-12-09 10:18:32.438803] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
01:07:25.603 [2024-12-09 10:18:32.439219] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84866 ] 01:07:25.603 [2024-12-09 10:18:32.612743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:25.862 [2024-12-09 10:18:32.747174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:07:27.829  [2024-12-09T10:18:35.439Z] Copying: 527/1024 [MB] (527 MBps) [2024-12-09T10:18:37.342Z] Copying: 1024/1024 [MB] (average 516 MBps) 01:07:30.298 01:07:30.298 10:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 01:07:30.298 10:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:07:32.200 10:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 01:07:32.200 10:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ce5356749ae4b8426faebec67a2e7292 01:07:32.200 10:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ce5356749ae4b8426faebec67a2e7292 != \c\e\5\3\5\6\7\4\9\a\e\4\b\8\4\2\6\f\a\e\b\e\c\6\7\a\2\e\7\2\9\2 ]] 01:07:32.200 10:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 01:07:32.200 10:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 01:07:32.200 10:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 01:07:32.200 Validate MD5 checksum, iteration 2 01:07:32.200 10:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:07:32.200 10:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:07:32.200 10:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:07:32.200 10:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:07:32.200 10:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:07:32.200 10:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:07:32.200 [2024-12-09 10:18:39.231669] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
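Each "Validate MD5 checksum" iteration traced above follows the same recipe: the jq gate just before it had to report used=0 cache chunks and opened=0 bands, then spdk_dd pulls a 1 GiB window (1024 x 1 MiB blocks, queue depth 2) from the exported ftln1 bdev over NVMe/TCP into a scratch file, and the file's md5 digest is compared against the digest recorded when the pattern was written earlier in the test; iteration 2 repeats at --skip=1024. A condensed sketch of that loop, with assumed variable names (the real helpers are test_validate_checksum and tcp_dd in test/ftl/upgrade_shutdown.sh and test/ftl/common.sh):

    # Hypothetical condensed form of the validation pass traced above.
    iterations=2
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # 1024 x 1 MiB reads from ftln1, queue depth 2, at the current window.
        "$rootdir"/build/bin/spdk_dd --cpumask='[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$rootdir"/test/ftl/config/ini.json \
            --ib=ftln1 --of="$rootdir"/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum "$rootdir"/test/ftl/file | cut -f1 -d' ')
        # md5[i] is assumed to hold the digest recorded at write time.
        [[ $sum == "${md5[i]}" ]] || { echo "checksum mismatch in window $i" >&2; exit 1; }
        skip=$((skip + 1024))
    done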
01:07:32.200 [2024-12-09 10:18:39.231833] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84934 ] 01:07:32.459 [2024-12-09 10:18:39.422206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:32.717 [2024-12-09 10:18:39.579518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:07:34.620  [2024-12-09T10:18:42.600Z] Copying: 491/1024 [MB] (491 MBps) [2024-12-09T10:18:42.600Z] Copying: 968/1024 [MB] (477 MBps) [2024-12-09T10:18:43.976Z] Copying: 1024/1024 [MB] (average 484 MBps) 01:07:36.932 01:07:36.932 10:18:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 01:07:36.932 10:18:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0c310ab9906746d428bce2ebc0c36a86 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0c310ab9906746d428bce2ebc0c36a86 != \0\c\3\1\0\a\b\9\9\0\6\7\4\6\d\4\2\8\b\c\e\2\e\b\c\0\c\3\6\a\8\6 ]] 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84785 ]] 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84785 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85003 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85003 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85003 ']' 01:07:39.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
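The kill -9 in the trace above is the point of the test: SIGKILL gives FTL no chance to persist its clean-shutdown metadata, so the superblock stays dirty and the relaunch has to take the recovery path visible below. A minimal sketch of the shutdown/restart pair, with assumed variable names, mirroring tcp_target_shutdown_dirty and tcp_target_setup from test/ftl/common.sh (waitforlisten is the stock helper from autotest_common.sh):

    # Deliberately unclean shutdown: no RPC, no SIGTERM, just SIGKILL.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    # Relaunch the target from the JSON config captured at setup time.
    "$rootdir"/build/bin/spdk_tgt --cpumask='[0]' \
        --config="$rootdir"/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    # Block until the new target answers on /var/tmp/spdk.sock.
    waitforlisten "$spdk_tgt_pid"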
01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:39.461 10:18:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 01:07:39.461 [2024-12-09 10:18:46.098877] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:07:39.461 [2024-12-09 10:18:46.099271] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85003 ] 01:07:39.461 [2024-12-09 10:18:46.282857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:39.461 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84785 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 01:07:39.461 [2024-12-09 10:18:46.434106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:07:40.878 [2024-12-09 10:18:47.525313] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 01:07:40.878 [2024-12-09 10:18:47.525453] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 01:07:40.878 [2024-12-09 10:18:47.675850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.878 [2024-12-09 10:18:47.675907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 01:07:40.878 [2024-12-09 10:18:47.675930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 01:07:40.878 [2024-12-09 10:18:47.675945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.878 [2024-12-09 10:18:47.676026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.878 [2024-12-09 10:18:47.676049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:07:40.878 [2024-12-09 10:18:47.676064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 01:07:40.878 [2024-12-09 10:18:47.676078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.878 [2024-12-09 10:18:47.676123] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 01:07:40.878 [2024-12-09 10:18:47.677042] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 01:07:40.878 [2024-12-09 10:18:47.677089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.878 [2024-12-09 10:18:47.677107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:07:40.878 [2024-12-09 10:18:47.677121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.982 ms 01:07:40.878 [2024-12-09 10:18:47.677135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.878 [2024-12-09 10:18:47.677646] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 01:07:40.878 [2024-12-09 10:18:47.701271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.878 [2024-12-09 10:18:47.701349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 01:07:40.878 [2024-12-09 10:18:47.701372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.627 ms 01:07:40.878 [2024-12-09 10:18:47.701386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.878 [2024-12-09 10:18:47.714666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 01:07:40.878 [2024-12-09 10:18:47.714897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 01:07:40.878 [2024-12-09 10:18:47.714927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 01:07:40.878 [2024-12-09 10:18:47.714942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.878 [2024-12-09 10:18:47.715474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.878 [2024-12-09 10:18:47.715502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:07:40.878 [2024-12-09 10:18:47.715520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.410 ms 01:07:40.878 [2024-12-09 10:18:47.715534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.878 [2024-12-09 10:18:47.715615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.878 [2024-12-09 10:18:47.715636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:07:40.878 [2024-12-09 10:18:47.715650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 01:07:40.878 [2024-12-09 10:18:47.715663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.878 [2024-12-09 10:18:47.715706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.878 [2024-12-09 10:18:47.715724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 01:07:40.878 [2024-12-09 10:18:47.715739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 01:07:40.878 [2024-12-09 10:18:47.715752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.878 [2024-12-09 10:18:47.715789] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 01:07:40.878 [2024-12-09 10:18:47.719675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.878 [2024-12-09 10:18:47.719716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:07:40.878 [2024-12-09 10:18:47.719733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.894 ms 01:07:40.878 [2024-12-09 10:18:47.719748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.878 [2024-12-09 10:18:47.719793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.878 [2024-12-09 10:18:47.719811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 01:07:40.878 [2024-12-09 10:18:47.719826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 01:07:40.878 [2024-12-09 10:18:47.719870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.878 [2024-12-09 10:18:47.719923] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 01:07:40.878 [2024-12-09 10:18:47.719958] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 01:07:40.878 [2024-12-09 10:18:47.720001] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 01:07:40.878 [2024-12-09 10:18:47.720028] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 01:07:40.878 [2024-12-09 10:18:47.720141] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 01:07:40.878 [2024-12-09 10:18:47.720159] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 01:07:40.878 [2024-12-09 10:18:47.720175] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 01:07:40.878 [2024-12-09 10:18:47.720192] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 01:07:40.878 [2024-12-09 10:18:47.720208] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 01:07:40.878 [2024-12-09 10:18:47.720223] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 01:07:40.878 [2024-12-09 10:18:47.720236] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 01:07:40.878 [2024-12-09 10:18:47.720248] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 01:07:40.878 [2024-12-09 10:18:47.720261] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 01:07:40.878 [2024-12-09 10:18:47.720304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.878 [2024-12-09 10:18:47.720322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 01:07:40.878 [2024-12-09 10:18:47.720336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.383 ms 01:07:40.878 [2024-12-09 10:18:47.720350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.878 [2024-12-09 10:18:47.720453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.878 [2024-12-09 10:18:47.720470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 01:07:40.878 [2024-12-09 10:18:47.720484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 01:07:40.878 [2024-12-09 10:18:47.720496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.878 [2024-12-09 10:18:47.720611] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 01:07:40.878 [2024-12-09 10:18:47.720635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 01:07:40.878 [2024-12-09 10:18:47.720648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:07:40.878 [2024-12-09 10:18:47.720662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:40.878 [2024-12-09 10:18:47.720675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 01:07:40.878 [2024-12-09 10:18:47.720687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 01:07:40.878 [2024-12-09 10:18:47.720700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 01:07:40.878 [2024-12-09 10:18:47.720712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 01:07:40.878 [2024-12-09 10:18:47.720724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 01:07:40.878 [2024-12-09 10:18:47.720736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:40.878 [2024-12-09 10:18:47.720748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 01:07:40.878 [2024-12-09 10:18:47.720760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 01:07:40.878 [2024-12-09 10:18:47.720772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:40.879 [2024-12-09 10:18:47.720784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 01:07:40.879 [2024-12-09 10:18:47.720796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
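From here the restarted target replays the same layout dump as the first bring-up, now with recovery steps (Recover band state, Preprocess P2L checkpoints, open-chunk recovery) interleaved. When skimming a console log shaped like this, a throwaway reader-side helper such as the following (not part of the test; assumes the usual one-entry-per-line console output saved to console.log) totals the per-step timings emitted by trace_step:

    # Sum the 'duration' values per step name across all trace_step notices.
    awk -F': ' '/trace_step.*name:/     { step = $NF }
                /trace_step.*duration:/ { sub(/ ms.*/, "", $NF); ms[step] += $NF }
                END { for (s in ms) printf "%10.3f ms  %s\n", ms[s], s }' console.log |
        sort -rn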
01:07:40.879 [2024-12-09 10:18:47.720808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:40.879 [2024-12-09 10:18:47.720822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 01:07:40.879 [2024-12-09 10:18:47.720835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 01:07:40.879 [2024-12-09 10:18:47.720848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:40.879 [2024-12-09 10:18:47.720860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 01:07:40.879 [2024-12-09 10:18:47.720873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 01:07:40.879 [2024-12-09 10:18:47.720898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:07:40.879 [2024-12-09 10:18:47.720911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 01:07:40.879 [2024-12-09 10:18:47.720923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 01:07:40.879 [2024-12-09 10:18:47.720936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:07:40.879 [2024-12-09 10:18:47.720948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 01:07:40.879 [2024-12-09 10:18:47.720960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 01:07:40.879 [2024-12-09 10:18:47.720979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:07:40.879 [2024-12-09 10:18:47.720991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 01:07:40.879 [2024-12-09 10:18:47.721003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 01:07:40.879 [2024-12-09 10:18:47.721015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:07:40.879 [2024-12-09 10:18:47.721027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 01:07:40.879 [2024-12-09 10:18:47.721040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 01:07:40.879 [2024-12-09 10:18:47.721052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:40.879 [2024-12-09 10:18:47.721064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 01:07:40.879 [2024-12-09 10:18:47.721076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 01:07:40.879 [2024-12-09 10:18:47.721088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:40.879 [2024-12-09 10:18:47.721100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 01:07:40.879 [2024-12-09 10:18:47.721112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 01:07:40.879 [2024-12-09 10:18:47.721124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:40.879 [2024-12-09 10:18:47.721136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 01:07:40.879 [2024-12-09 10:18:47.721148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 01:07:40.879 [2024-12-09 10:18:47.721160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:07:40.879 [2024-12-09 10:18:47.721172] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 01:07:40.879 [2024-12-09 10:18:47.721186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 01:07:40.879 [2024-12-09 10:18:47.721198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:07:40.879 [2024-12-09 10:18:47.721211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 01:07:40.879 [2024-12-09 10:18:47.721224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 01:07:40.879 [2024-12-09 10:18:47.721238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 01:07:40.879 [2024-12-09 10:18:47.721266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 01:07:40.879 [2024-12-09 10:18:47.721282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 01:07:40.879 [2024-12-09 10:18:47.721294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 01:07:40.879 [2024-12-09 10:18:47.721307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 01:07:40.879 [2024-12-09 10:18:47.721321] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 01:07:40.879 [2024-12-09 10:18:47.721338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:07:40.879 [2024-12-09 10:18:47.721352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 01:07:40.879 [2024-12-09 10:18:47.721365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 01:07:40.879 [2024-12-09 10:18:47.721378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 01:07:40.879 [2024-12-09 10:18:47.721391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 01:07:40.879 [2024-12-09 10:18:47.721404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 01:07:40.879 [2024-12-09 10:18:47.721417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 01:07:40.879 [2024-12-09 10:18:47.721430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 01:07:40.879 [2024-12-09 10:18:47.721442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 01:07:40.879 [2024-12-09 10:18:47.721455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 01:07:40.879 [2024-12-09 10:18:47.721468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 01:07:40.879 [2024-12-09 10:18:47.721481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 01:07:40.879 [2024-12-09 10:18:47.721494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 01:07:40.879 [2024-12-09 10:18:47.721507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 01:07:40.879 [2024-12-09 10:18:47.721520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 01:07:40.879 [2024-12-09 10:18:47.721533] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 01:07:40.879 [2024-12-09 10:18:47.721548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:07:40.879 [2024-12-09 10:18:47.721568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:07:40.879 [2024-12-09 10:18:47.721582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 01:07:40.879 [2024-12-09 10:18:47.721595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 01:07:40.879 [2024-12-09 10:18:47.721608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 01:07:40.879 [2024-12-09 10:18:47.721623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.879 [2024-12-09 10:18:47.721636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 01:07:40.879 [2024-12-09 10:18:47.721650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.080 ms 01:07:40.879 [2024-12-09 10:18:47.721663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.879 [2024-12-09 10:18:47.763933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.879 [2024-12-09 10:18:47.763994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:07:40.879 [2024-12-09 10:18:47.764033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.195 ms 01:07:40.879 [2024-12-09 10:18:47.764047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.879 [2024-12-09 10:18:47.764175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.879 [2024-12-09 10:18:47.764192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 01:07:40.879 [2024-12-09 10:18:47.764206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 01:07:40.879 [2024-12-09 10:18:47.764219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.879 [2024-12-09 10:18:47.816890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.879 [2024-12-09 10:18:47.816945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:07:40.879 [2024-12-09 10:18:47.816992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.582 ms 01:07:40.879 [2024-12-09 10:18:47.817036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.879 [2024-12-09 10:18:47.817141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.879 [2024-12-09 10:18:47.817160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:07:40.879 [2024-12-09 10:18:47.817192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 01:07:40.879 [2024-12-09 10:18:47.817213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:07:40.879 [2024-12-09 10:18:47.817452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:07:40.879 [2024-12-09 10:18:47.817474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:07:40.879 [2024-12-09 10:18:47.817489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 01:07:40.879 [2024-12-09 10:18:47.817511] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0
01:07:40.879 [2024-12-09 10:18:47.817576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:40.879 [2024-12-09 10:18:47.817594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
01:07:40.879 [2024-12-09 10:18:47.817609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms
01:07:40.879 [2024-12-09 10:18:47.817622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:40.879 [2024-12-09 10:18:47.842829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:40.879 [2024-12-09 10:18:47.843069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
01:07:40.879 [2024-12-09 10:18:47.843102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.164 ms
01:07:40.879 [2024-12-09 10:18:47.843127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:40.879 [2024-12-09 10:18:47.843374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:40.879 [2024-12-09 10:18:47.843403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery
01:07:40.879 [2024-12-09 10:18:47.843420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms
01:07:40.879 [2024-12-09 10:18:47.843434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:40.879 [2024-12-09 10:18:47.883404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:40.879 [2024-12-09 10:18:47.883481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state
01:07:40.879 [2024-12-09 10:18:47.883504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.932 ms
01:07:40.879 [2024-12-09 10:18:47.883518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:40.879 [2024-12-09 10:18:47.897170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:40.880 [2024-12-09 10:18:47.897218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing
01:07:40.880 [2024-12-09 10:18:47.897265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.807 ms
01:07:40.880 [2024-12-09 10:18:47.897283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:41.139 [2024-12-09 10:18:47.982842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:41.139 [2024-12-09 10:18:47.983152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints
01:07:41.139 [2024-12-09 10:18:47.983188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 85.461 ms
01:07:41.139 [2024-12-09 10:18:47.983204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:41.139 [2024-12-09 10:18:47.983437] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8
01:07:41.139 [2024-12-09 10:18:47.983589] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9
01:07:41.139 [2024-12-09 10:18:47.983757] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12
01:07:41.139 [2024-12-09 10:18:47.983893] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0
01:07:41.139 [2024-12-09 10:18:47.983915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:41.139 [2024-12-09 10:18:47.983929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints
01:07:41.139 [2024-12-09 10:18:47.983945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.616 ms
01:07:41.139 [2024-12-09 10:18:47.983960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:41.139 [2024-12-09 10:18:47.984131] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L
01:07:41.139 [2024-12-09 10:18:47.984171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:41.139 [2024-12-09 10:18:47.984230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L
01:07:41.139 [2024-12-09 10:18:47.984244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms
01:07:41.139 [2024-12-09 10:18:47.984274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:41.139 [2024-12-09 10:18:48.007601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:41.139 [2024-12-09 10:18:48.007848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state
01:07:41.139 [2024-12-09 10:18:48.007895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.292 ms
01:07:41.139 [2024-12-09 10:18:48.007928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:41.139 [2024-12-09 10:18:48.021316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:41.139 [2024-12-09 10:18:48.021366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID
01:07:41.139 [2024-12-09 10:18:48.021384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms
01:07:41.139 [2024-12-09 10:18:48.021397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:41.139 [2024-12-09 10:18:48.021622] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14
01:07:41.139 [2024-12-09 10:18:48.021893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:41.139 [2024-12-09 10:18:48.021917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare
01:07:41.139 [2024-12-09 10:18:48.021933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms
01:07:41.139 [2024-12-09 10:18:48.021947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:41.708 [2024-12-09 10:18:48.630611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:41.708 [2024-12-09 10:18:48.631027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss
01:07:41.708 [2024-12-09 10:18:48.631063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 607.551 ms
01:07:41.708 [2024-12-09 10:18:48.631079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:41.708 [2024-12-09 10:18:48.636573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:41.708 [2024-12-09 10:18:48.636623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map
01:07:41.708 [2024-12-09 10:18:48.636643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.266 ms
01:07:41.708 [2024-12-09 10:18:48.636658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:41.708 [2024-12-09 10:18:48.637201] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14
01:07:41.708 [2024-12-09 10:18:48.637238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:41.708 [2024-12-09 10:18:48.637270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk
01:07:41.708 [2024-12-09 10:18:48.637286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.524 ms
01:07:41.708 [2024-12-09 10:18:48.637300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:41.709 [2024-12-09 10:18:48.637446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:41.709 [2024-12-09 10:18:48.637471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup
01:07:41.709 [2024-12-09 10:18:48.637486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
01:07:41.709 [2024-12-09 10:18:48.637508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:41.709 [2024-12-09 10:18:48.637566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 615.942 ms, result 0
01:07:41.709 [2024-12-09 10:18:48.637634] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15
01:07:41.709 [2024-12-09 10:18:48.637751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:41.709 [2024-12-09 10:18:48.637767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare
01:07:41.709 [2024-12-09 10:18:48.637781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.117 ms
01:07:41.709 [2024-12-09 10:18:48.637794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.278 [2024-12-09 10:18:49.228410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.228533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss
01:07:42.278 [2024-12-09 10:18:49.228600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 589.408 ms
01:07:42.278 [2024-12-09 10:18:49.228620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.278 [2024-12-09 10:18:49.233869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.233919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map
01:07:42.278 [2024-12-09 10:18:49.233973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.348 ms
01:07:42.278 [2024-12-09 10:18:49.233986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.278 [2024-12-09 10:18:49.234507] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15
01:07:42.278 [2024-12-09 10:18:49.234538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.234552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk
01:07:42.278 [2024-12-09 10:18:49.234566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms
01:07:42.278 [2024-12-09 10:18:49.234580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.278 [2024-12-09 10:18:49.234667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.234688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup
01:07:42.278 [2024-12-09 10:18:49.234703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
01:07:42.278 [2024-12-09 10:18:49.234715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.278 [2024-12-09 10:18:49.234787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 597.159 ms, result 0
01:07:42.278 [2024-12-09 10:18:49.234907] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2
01:07:42.278 [2024-12-09 10:18:49.234929] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully
01:07:42.278 [2024-12-09 10:18:49.234946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.234960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L
01:07:42.278 [2024-12-09 10:18:49.234976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1213.374 ms
01:07:42.278 [2024-12-09 10:18:49.234989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.278 [2024-12-09 10:18:49.235037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.235063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery
01:07:42.278 [2024-12-09 10:18:49.235077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
01:07:42.278 [2024-12-09 10:18:49.235091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.278 [2024-12-09 10:18:49.247497] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
01:07:42.278 [2024-12-09 10:18:49.247695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.247714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P
01:07:42.278 [2024-12-09 10:18:49.247728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.577 ms
01:07:42.278 [2024-12-09 10:18:49.247740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.278 [2024-12-09 10:18:49.248496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.248534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory
01:07:42.278 [2024-12-09 10:18:49.248557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.655 ms
01:07:42.278 [2024-12-09 10:18:49.248571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.278 [2024-12-09 10:18:49.250979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.251012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters
01:07:42.278 [2024-12-09 10:18:49.251042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.369 ms
01:07:42.278 [2024-12-09 10:18:49.251054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.278 [2024-12-09 10:18:49.251126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.251144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction
01:07:42.278 [2024-12-09 10:18:49.251157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms
01:07:42.278 [2024-12-09 10:18:49.251187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
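For reference, each FTL management step in the startup trace above is logged by trace_step as a fixed quadruplet: Action (mngt/ftl_mngt.c line 427), name (428), duration (430), status (431). Once the log has one entry per line, slow steps can be mined mechanically. A minimal sketch, assuming only the trace_step format shown here (the script name and the 10 ms threshold are illustrative, not part of the SPDK test suite):

    #!/usr/bin/env bash
    # slow-steps.sh: list FTL management steps slower than a threshold (ms).
    awk -v thr=10.0 '
        /428:trace_step/ { name = $0; sub(/.*name: /, "", name) }  # remember the step name
        /430:trace_step/ { if ($(NF-1) + 0 > thr)                  # duration value precedes "ms"
                               printf "%10.3f ms  %s\n", $(NF-1), name }
    ' "$1"

Run as ./slow-steps.sh console.log; on this excerpt it would surface, for example, the two roughly 600 ms "Chunk recovery, read vss" steps.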
01:07:42.278 [2024-12-09 10:18:49.251386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.251406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization
01:07:42.278 [2024-12-09 10:18:49.251420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms
01:07:42.278 [2024-12-09 10:18:49.251432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.278 [2024-12-09 10:18:49.251462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.251476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller
01:07:42.278 [2024-12-09 10:18:49.251505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms
01:07:42.278 [2024-12-09 10:18:49.251534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.278 [2024-12-09 10:18:49.251612] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped
01:07:42.278 [2024-12-09 10:18:49.251646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.278 [2024-12-09 10:18:49.251660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup
01:07:42.278 [2024-12-09 10:18:49.251675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms
01:07:42.278 [2024-12-09 10:18:49.251702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.279 [2024-12-09 10:18:49.251771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:42.279 [2024-12-09 10:18:49.251788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
01:07:42.279 [2024-12-09 10:18:49.251802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms
01:07:42.279 [2024-12-09 10:18:49.251815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:42.279 [2024-12-09 10:18:49.253161] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1576.735 ms, result 0
01:07:42.279 [2024-12-09 10:18:49.268324] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
01:07:42.279 [2024-12-09 10:18:49.284364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
01:07:42.279 [2024-12-09 10:18:49.293851] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
01:07:42.538 10:18:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:07:42.539 Validate MD5 checksum, iteration 1
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
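The xtrace above is the whole verification pattern in miniature. Condensed into standalone form it amounts to the sketch below; this is an assumed shape, since tcp_dd is the suite's wrapper around spdk_dd and the saved-checksum layout is inferred from the file.md5 cleanup later in this log:

    # Sketch of test_validate_checksum from ftl/upgrade_shutdown.sh (reconstruction).
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Pull 1024 x 1 MiB blocks from the FTL bdev over NVMe/TCP into a scratch file.
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        # Fail if the data read back after the shutdown/upgrade cycle differs from the
        # checksum recorded beforehand (assumed stored one sum per iteration in file.md5).
        [[ $sum == "$(sed -n "$((i + 1))p" "$testfile.md5" | cut -f1 -d' ')" ]] || return 1
        skip=$((skip + 1024))
    done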
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
01:07:42.539 10:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
01:07:42.539 [2024-12-09 10:18:49.445636] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
01:07:42.539 [2024-12-09 10:18:49.446174] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85042 ]
01:07:42.798 [2024-12-09 10:18:49.638754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:07:42.798 [2024-12-09 10:18:49.800011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:07:44.703  [2024-12-09T10:18:52.683Z] Copying: 438/1024 [MB] (438 MBps) [2024-12-09T10:18:52.942Z] Copying: 888/1024 [MB] (450 MBps) [2024-12-09T10:18:54.945Z] Copying: 1024/1024 [MB] (average 437 MBps)
01:07:47.901
01:07:47.901 10:18:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
01:07:47.901 10:18:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
01:07:49.808 10:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
01:07:49.808 Validate MD5 checksum, iteration 2
01:07:49.808 10:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ce5356749ae4b8426faebec67a2e7292
01:07:49.808 10:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ce5356749ae4b8426faebec67a2e7292 != \c\e\5\3\5\6\7\4\9\a\e\4\b\8\4\2\6\f\a\e\b\e\c\6\7\a\2\e\7\2\9\2 ]]
01:07:49.808 10:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
01:07:49.808 10:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
01:07:49.808 10:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
01:07:49.808 10:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
01:07:49.808 10:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
01:07:49.808 10:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
01:07:49.808 10:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
01:07:49.808 10:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
01:07:49.808 10:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
01:07:49.808 [2024-12-09 10:18:56.805144] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
01:07:49.808 [2024-12-09 10:18:56.805668] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85116 ]
01:07:50.066 [2024-12-09 10:18:56.993922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:07:50.324 [2024-12-09 10:18:57.161723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:07:52.225  [2024-12-09T10:18:59.836Z] Copying: 477/1024 [MB] (477 MBps) [2024-12-09T10:19:00.095Z] Copying: 918/1024 [MB] (441 MBps) [2024-12-09T10:19:01.469Z] Copying: 1024/1024 [MB] (average 452 MBps)
01:07:54.425
01:07:54.425 10:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
01:07:54.425 10:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
01:07:56.330 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
01:07:56.330 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0c310ab9906746d428bce2ebc0c36a86
01:07:56.330 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0c310ab9906746d428bce2ebc0c36a86 != \0\c\3\1\0\a\b\9\9\0\6\7\4\6\d\4\2\8\b\c\e\2\e\b\c\0\c\3\6\a\8\6 ]]
01:07:56.330 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
01:07:56.330 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
01:07:56.330 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT
01:07:56.330 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup
01:07:56.330 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT
01:07:56.330 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85003 ]]
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85003
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85003 ']'
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85003
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85003
01:07:56.589 killing process with pid 85003
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
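The kill sequence traced above follows the killprocess helper in autotest_common.sh; its logic, reconstructed from the trace (a reconstruction, not the verbatim function, which also special-cases processes run under sudo):

    # killprocess <pid>: terminate an SPDK app started by the test, politely.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                      # is it still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"                                     # SIGTERM lets reactors exit cleanly
        wait "$pid"                                     # reap it and collect the exit status
    }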
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85003'
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85003
01:07:56.589 10:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85003
01:07:57.524 [2024-12-09 10:19:04.382828] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
01:07:57.524 [2024-12-09 10:19:04.400689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.524 [2024-12-09 10:19:04.400738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
01:07:57.524 [2024-12-09 10:19:04.400777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
01:07:57.524 [2024-12-09 10:19:04.400790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.524 [2024-12-09 10:19:04.400821] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
01:07:57.524 [2024-12-09 10:19:04.404298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.524 [2024-12-09 10:19:04.404338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
01:07:57.524 [2024-12-09 10:19:04.404371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.455 ms
01:07:57.524 [2024-12-09 10:19:04.404385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.524 [2024-12-09 10:19:04.404650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.524 [2024-12-09 10:19:04.404671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
01:07:57.524 [2024-12-09 10:19:04.404685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms
01:07:57.524 [2024-12-09 10:19:04.404698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.524 [2024-12-09 10:19:04.406007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.525 [2024-12-09 10:19:04.406070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
01:07:57.525 [2024-12-09 10:19:04.406098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.287 ms
01:07:57.525 [2024-12-09 10:19:04.406134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.407469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.525 [2024-12-09 10:19:04.407672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims
01:07:57.525 [2024-12-09 10:19:04.407714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.291 ms
01:07:57.525 [2024-12-09 10:19:04.407728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.419486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.525 [2024-12-09 10:19:04.419529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
01:07:57.525 [2024-12-09 10:19:04.419571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.692 ms
01:07:57.525 [2024-12-09 10:19:04.419584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.425880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.525 [2024-12-09 10:19:04.425923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
01:07:57.525 [2024-12-09 10:19:04.425957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.254 ms
01:07:57.525 [2024-12-09 10:19:04.425970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.426053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.525 [2024-12-09 10:19:04.426071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
01:07:57.525 [2024-12-09 10:19:04.426110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms
01:07:57.525 [2024-12-09 10:19:04.426131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.437215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.525 [2024-12-09 10:19:04.437281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata
01:07:57.525 [2024-12-09 10:19:04.437315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.060 ms
01:07:57.525 [2024-12-09 10:19:04.437327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.448429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.525 [2024-12-09 10:19:04.448601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata
01:07:57.525 [2024-12-09 10:19:04.448644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.057 ms
01:07:57.525 [2024-12-09 10:19:04.448657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.459913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.525 [2024-12-09 10:19:04.459956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
01:07:57.525 [2024-12-09 10:19:04.459973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.211 ms
01:07:57.525 [2024-12-09 10:19:04.459986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.471121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.525 [2024-12-09 10:19:04.471318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
01:07:57.525 [2024-12-09 10:19:04.471362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.057 ms
01:07:57.525 [2024-12-09 10:19:04.471377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.471423] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
01:07:57.525 [2024-12-09 10:19:04.471447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
01:07:57.525 [2024-12-09 10:19:04.471463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
01:07:57.525 [2024-12-09 10:19:04.471477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
01:07:57.525 [2024-12-09 10:19:04.471491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
01:07:57.525 [2024-12-09 10:19:04.471708] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
01:07:57.525 [2024-12-09 10:19:04.471720] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 010e698c-d7c3-41c5-90c6-8788d7ba5096
01:07:57.525 [2024-12-09 10:19:04.471733] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
01:07:57.525 [2024-12-09 10:19:04.471745] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
01:07:57.525 [2024-12-09 10:19:04.471758] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
01:07:57.525 [2024-12-09 10:19:04.471770] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
01:07:57.525 [2024-12-09 10:19:04.471783] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
01:07:57.525 [2024-12-09 10:19:04.471795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
01:07:57.525 [2024-12-09 10:19:04.471815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
01:07:57.525 [2024-12-09 10:19:04.471826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
01:07:57.525 [2024-12-09 10:19:04.471852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
01:07:57.525 [2024-12-09 10:19:04.471864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.525 [2024-12-09 10:19:04.471878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
01:07:57.525 [2024-12-09 10:19:04.471891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.443 ms
01:07:57.525 [2024-12-09 10:19:04.471903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
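The band dump just logged is the device's allocation picture at clean shutdown: three closed bands (two full at 261120/261120 blocks, one partial at 2048/261120) and fifteen free, consistent with the "total writes: 320" and "WAF: inf" stats (write amplification divides by user writes, which are 0 here). A quick tally, assuming the one-entry-per-line format used in this log:

    # Count bands per state and sum the valid blocks.
    awk '/ftl_dev_dump_bands/ && /Band [0-9]/ { sum += $(NF-6); state[$NF]++ }
         END { for (s in state) print s, state[s]; print "valid blocks:", sum }' console.log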
01:07:57.525 [2024-12-09 10:19:04.487372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.525 [2024-12-09 10:19:04.487412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
01:07:57.525 [2024-12-09 10:19:04.487446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.443 ms
01:07:57.525 [2024-12-09 10:19:04.487459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.487898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:07:57.525 [2024-12-09 10:19:04.487919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
01:07:57.525 [2024-12-09 10:19:04.487933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.405 ms
01:07:57.525 [2024-12-09 10:19:04.487946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.539813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:07:57.525 [2024-12-09 10:19:04.539873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
01:07:57.525 [2024-12-09 10:19:04.539908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:07:57.525 [2024-12-09 10:19:04.539928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.539982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:07:57.525 [2024-12-09 10:19:04.539998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
01:07:57.525 [2024-12-09 10:19:04.540011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:07:57.525 [2024-12-09 10:19:04.540023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.540132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:07:57.525 [2024-12-09 10:19:04.540152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
01:07:57.525 [2024-12-09 10:19:04.540165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:07:57.525 [2024-12-09 10:19:04.540178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.525 [2024-12-09 10:19:04.540210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:07:57.525 [2024-12-09 10:19:04.540224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
01:07:57.525 [2024-12-09 10:19:04.540237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:07:57.525 [2024-12-09 10:19:04.540249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.783 [2024-12-09 10:19:04.639276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:07:57.784 [2024-12-09 10:19:04.639345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
01:07:57.784 [2024-12-09 10:19:04.639381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:07:57.784 [2024-12-09 10:19:04.639395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.784 [2024-12-09 10:19:04.717159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:07:57.784 [2024-12-09 10:19:04.717220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
01:07:57.784 [2024-12-09 10:19:04.717257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:07:57.784 [2024-12-09 10:19:04.717308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.784 [2024-12-09 10:19:04.717483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:07:57.784 [2024-12-09 10:19:04.717504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
01:07:57.784 [2024-12-09 10:19:04.717519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:07:57.784 [2024-12-09 10:19:04.717532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.784 [2024-12-09 10:19:04.717597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:07:57.784 [2024-12-09 10:19:04.717643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
01:07:57.784 [2024-12-09 10:19:04.717658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:07:57.784 [2024-12-09 10:19:04.717671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.784 [2024-12-09 10:19:04.717816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:07:57.784 [2024-12-09 10:19:04.717841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
01:07:57.784 [2024-12-09 10:19:04.717856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:07:57.784 [2024-12-09 10:19:04.717868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.784 [2024-12-09 10:19:04.717920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:07:57.784 [2024-12-09 10:19:04.717938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
01:07:57.784 [2024-12-09 10:19:04.717958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:07:57.784 [2024-12-09 10:19:04.717972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.784 [2024-12-09 10:19:04.718019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:07:57.784 [2024-12-09 10:19:04.718036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
01:07:57.784 [2024-12-09 10:19:04.718050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:07:57.784 [2024-12-09 10:19:04.718062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.784 [2024-12-09 10:19:04.718145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:07:57.784 [2024-12-09 10:19:04.718170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
01:07:57.784 [2024-12-09 10:19:04.718185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:07:57.784 [2024-12-09 10:19:04.718198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:07:57.784 [2024-12-09 10:19:04.718367] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 317.635 ms, result 0
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
01:07:59.158 Remove shared memory files
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
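The remove_shm trace that starts here shows several doubled "rm -f rm -f" lines, which is how the xtrace renders the helper's argument expansion; the substance is that the per-test shared-memory and trace files are deleted. In spirit (a sketch only; the exact file list lives in ftl/common.sh):

    # Cleanup the trace performs, with paths taken from the log itself.
    rm -f /dev/shm/spdk_tgt_trace.pid84785   # SPDK trace shm left by the target process
    rm -f /dev/shm/iscsi                     # iSCSI shared-memory file, if present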
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84785
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
01:07:59.158 ************************************
01:07:59.158 END TEST ftl_upgrade_shutdown
01:07:59.158 ************************************
01:07:59.158
01:07:59.158
01:07:59.158 real 1m35.875s
01:07:59.158 user 2m17.534s
01:07:59.158 sys 0m24.421s
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
01:07:59.158 10:19:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
01:07:59.158 10:19:05 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
01:07:59.158 10:19:05 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
01:07:59.158 10:19:05 ftl -- ftl/ftl.sh@14 -- # killprocess 77124
01:07:59.158 10:19:05 ftl -- common/autotest_common.sh@954 -- # '[' -z 77124 ']'
01:07:59.158 10:19:05 ftl -- common/autotest_common.sh@958 -- # kill -0 77124
01:07:59.158 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77124) - No such process
01:07:59.158 Process with pid 77124 is not found
01:07:59.158 10:19:05 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77124 is not found'
01:07:59.158 10:19:05 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
01:07:59.158 10:19:05 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
01:07:59.158 10:19:05 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85241
01:07:59.158 10:19:05 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85241
01:07:59.158 10:19:05 ftl -- common/autotest_common.sh@835 -- # '[' -z 85241 ']'
01:07:59.158 10:19:05 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:07:59.158 10:19:05 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
01:07:59.158 10:19:05 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:07:59.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:07:59.158 10:19:05 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
01:07:59.158 10:19:05 ftl -- common/autotest_common.sh@10 -- # set +x
01:07:59.158 [2024-12-09 10:19:06.003089] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
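waitforlisten, traced above, blocks until the freshly launched spdk_tgt answers on its RPC socket. A plausible reduction of the loop (the real helper in autotest_common.sh carries more error handling; the rpc_get_methods probe is an assumption about how readiness is detected):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" || return 1        # target died during startup
            # Succeeds once the app's RPC server is accepting requests.
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }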
01:07:59.158 [2024-12-09 10:19:06.003308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85241 ]
01:07:59.158 [2024-12-09 10:19:06.182047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:07:59.416 [2024-12-09 10:19:06.301706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:08:00.352 10:19:07 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:08:00.352 10:19:07 ftl -- common/autotest_common.sh@868 -- # return 0
01:08:00.352 10:19:07 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
01:08:00.610 nvme0n1
01:08:00.610 10:19:07 ftl -- ftl/ftl.sh@22 -- # clear_lvols
01:08:00.610 10:19:07 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
01:08:00.610 10:19:07 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
01:08:00.869 10:19:07 ftl -- ftl/common.sh@28 -- # stores=6589e53e-70cc-4a8a-b860-805c2979ad6c
01:08:00.869 10:19:07 ftl -- ftl/common.sh@29 -- # for lvs in $stores
01:08:00.869 10:19:07 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6589e53e-70cc-4a8a-b860-805c2979ad6c
01:08:01.127 10:19:07 ftl -- ftl/ftl.sh@23 -- # killprocess 85241
01:08:01.127 10:19:07 ftl -- common/autotest_common.sh@954 -- # '[' -z 85241 ']'
01:08:01.127 10:19:07 ftl -- common/autotest_common.sh@958 -- # kill -0 85241
01:08:01.127 10:19:07 ftl -- common/autotest_common.sh@959 -- # uname
01:08:01.127 10:19:07 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:08:01.127 10:19:07 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85241
01:08:01.127 killing process with pid 85241
01:08:01.127 10:19:08 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:08:01.127 10:19:08 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:08:01.127 10:19:08 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85241'
01:08:01.127 10:19:08 ftl -- common/autotest_common.sh@973 -- # kill 85241
01:08:01.127 10:19:08 ftl -- common/autotest_common.sh@978 -- # wait 85241
01:08:03.031 10:19:09 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
01:08:03.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:08:03.290 Waiting for block devices as requested
01:08:03.290 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
01:08:03.549 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
01:08:03.549 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
01:08:03.549 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
01:08:08.817 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
01:08:08.817 Remove shared memory files
01:08:08.817 10:19:15 ftl -- ftl/ftl.sh@28 -- # remove_shm
01:08:08.817 10:19:15 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
01:08:08.817 10:19:15 ftl -- ftl/common.sh@205 -- # rm -f rm -f
01:08:08.817 10:19:15 ftl -- ftl/common.sh@206 -- # rm -f rm -f
01:08:08.817 10:19:15 ftl -- ftl/common.sh@207 -- # rm -f rm -f
01:08:08.817 10:19:15 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
01:08:08.817 10:19:15 ftl -- ftl/common.sh@209 -- # rm -f rm -f
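clear_lvols, traced above, enumerates every lvstore over JSON-RPC and deletes each by UUID; the jq filter '.[] | .uuid' is visible in the trace. Reassembled into a sketch (the rpc path is shortened for readability):

    clear_lvols() {
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        # One UUID per line from the lvstore listing.
        stores=$($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
        for lvs in $stores; do
            $rpc bdev_lvol_delete_lvstore -u "$lvs"
        done
    }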
************************************
01:08:08.817 END TEST ftl
01:08:08.817 ************************************
01:08:08.817
01:08:08.817
01:08:08.817 real 12m25.202s
01:08:08.817 user 15m23.961s
01:08:08.817 sys 1m40.669s
01:08:08.817 10:19:15 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
01:08:08.817 10:19:15 ftl -- common/autotest_common.sh@10 -- # set +x
01:08:08.817 10:19:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
01:08:08.818 10:19:15 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
01:08:08.818 10:19:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
01:08:08.818 10:19:15 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
01:08:08.818 10:19:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
01:08:08.818 10:19:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
01:08:08.818 10:19:15 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
01:08:08.818 10:19:15 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
01:08:08.818 10:19:15 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
01:08:08.818 10:19:15 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
01:08:08.818 10:19:15 -- common/autotest_common.sh@726 -- # xtrace_disable
01:08:08.818 10:19:15 -- common/autotest_common.sh@10 -- # set +x
01:08:08.818 10:19:15 -- spdk/autotest.sh@388 -- # autotest_cleanup
01:08:08.818 10:19:15 -- common/autotest_common.sh@1396 -- # local autotest_es=0
01:08:08.818 10:19:15 -- common/autotest_common.sh@1397 -- # xtrace_disable
01:08:08.818 10:19:15 -- common/autotest_common.sh@10 -- # set +x
01:08:10.719 INFO: APP EXITING
01:08:10.719 INFO: killing all VMs
01:08:10.719 INFO: killing vhost app
01:08:10.719 INFO: EXIT DONE
01:08:10.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:08:11.286 0000:00:11.0 (1b36 0010): Already using the nvme driver
01:08:11.286 0000:00:10.0 (1b36 0010): Already using the nvme driver
01:08:11.286 0000:00:12.0 (1b36 0010): Already using the nvme driver
01:08:11.286 0000:00:13.0 (1b36 0010): Already using the nvme driver
01:08:11.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:08:12.113 Cleaning
01:08:12.113 Removing: /var/run/dpdk/spdk0/config
01:08:12.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
01:08:12.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
01:08:12.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
01:08:12.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
01:08:12.113 Removing: /var/run/dpdk/spdk0/fbarray_memzone
01:08:12.113 Removing: /var/run/dpdk/spdk0/hugepage_info
01:08:12.113 Removing: /var/run/dpdk/spdk0
01:08:12.113 Removing: /var/run/dpdk/spdk_pid57961
01:08:12.113 Removing: /var/run/dpdk/spdk_pid58196
01:08:12.113 Removing: /var/run/dpdk/spdk_pid58431
01:08:12.113 Removing: /var/run/dpdk/spdk_pid58535
01:08:12.113 Removing: /var/run/dpdk/spdk_pid58585
01:08:12.113 Removing: /var/run/dpdk/spdk_pid58719
01:08:12.113 Removing: /var/run/dpdk/spdk_pid58737
01:08:12.113 Removing: /var/run/dpdk/spdk_pid58947
01:08:12.113 Removing: /var/run/dpdk/spdk_pid59059
01:08:12.113 Removing: /var/run/dpdk/spdk_pid59171
01:08:12.113 Removing: /var/run/dpdk/spdk_pid59293
01:08:12.113 Removing: /var/run/dpdk/spdk_pid59396
01:08:12.113 Removing: /var/run/dpdk/spdk_pid59435
01:08:12.113 Removing: /var/run/dpdk/spdk_pid59477
01:08:12.113 Removing: /var/run/dpdk/spdk_pid59548
01:08:12.113 Removing: /var/run/dpdk/spdk_pid59637
01:08:12.113 Removing: /var/run/dpdk/spdk_pid60130
01:08:12.113 Removing: /var/run/dpdk/spdk_pid60205
01:08:12.113 Removing: /var/run/dpdk/spdk_pid60274
01:08:12.113 Removing: /var/run/dpdk/spdk_pid60295
01:08:12.113 Removing: /var/run/dpdk/spdk_pid60443
01:08:12.113 Removing: /var/run/dpdk/spdk_pid60465
01:08:12.113 Removing: /var/run/dpdk/spdk_pid60626
01:08:12.113 Removing: /var/run/dpdk/spdk_pid60642
01:08:12.113 Removing: /var/run/dpdk/spdk_pid60706
01:08:12.113 Removing: /var/run/dpdk/spdk_pid60735
01:08:12.113 Removing: /var/run/dpdk/spdk_pid60799
01:08:12.113 Removing: /var/run/dpdk/spdk_pid60817
01:08:12.113 Removing: /var/run/dpdk/spdk_pid61018
01:08:12.113 Removing: /var/run/dpdk/spdk_pid61054
01:08:12.113 Removing: /var/run/dpdk/spdk_pid61143
01:08:12.113 Removing: /var/run/dpdk/spdk_pid61336
01:08:12.113 Removing: /var/run/dpdk/spdk_pid61433
01:08:12.113 Removing: /var/run/dpdk/spdk_pid61475
01:08:12.113 Removing: /var/run/dpdk/spdk_pid61986
01:08:12.113 Removing: /var/run/dpdk/spdk_pid62084
01:08:12.113 Removing: /var/run/dpdk/spdk_pid62199
01:08:12.113 Removing: /var/run/dpdk/spdk_pid62252
01:08:12.113 Removing: /var/run/dpdk/spdk_pid62283
01:08:12.113 Removing: /var/run/dpdk/spdk_pid62367
01:08:12.113 Removing: /var/run/dpdk/spdk_pid63002
01:08:12.113 Removing: /var/run/dpdk/spdk_pid63045
01:08:12.113 Removing: /var/run/dpdk/spdk_pid63571
01:08:12.113 Removing: /var/run/dpdk/spdk_pid63673
01:08:12.113 Removing: /var/run/dpdk/spdk_pid63789
01:08:12.113 Removing: /var/run/dpdk/spdk_pid63849
01:08:12.113 Removing: /var/run/dpdk/spdk_pid63874
01:08:12.113 Removing: /var/run/dpdk/spdk_pid63905
01:08:12.113 Removing: /var/run/dpdk/spdk_pid65792
01:08:12.113 Removing: /var/run/dpdk/spdk_pid65935
01:08:12.113 Removing: /var/run/dpdk/spdk_pid65939
01:08:12.113 Removing: /var/run/dpdk/spdk_pid65956
01:08:12.113 Removing: /var/run/dpdk/spdk_pid65995
01:08:12.113 Removing: /var/run/dpdk/spdk_pid66005
01:08:12.113 Removing: /var/run/dpdk/spdk_pid66017
01:08:12.113 Removing: /var/run/dpdk/spdk_pid66060
01:08:12.113 Removing: /var/run/dpdk/spdk_pid66065
01:08:12.113 Removing: /var/run/dpdk/spdk_pid66077
01:08:12.113 Removing: /var/run/dpdk/spdk_pid66122
01:08:12.113 Removing: /var/run/dpdk/spdk_pid66126
01:08:12.113 Removing: /var/run/dpdk/spdk_pid66138
01:08:12.113 Removing: /var/run/dpdk/spdk_pid67552
01:08:12.113 Removing: /var/run/dpdk/spdk_pid67667
01:08:12.113 Removing: /var/run/dpdk/spdk_pid69077
01:08:12.113 Removing: /var/run/dpdk/spdk_pid70822
01:08:12.113 Removing: /var/run/dpdk/spdk_pid70897
01:08:12.113 Removing: /var/run/dpdk/spdk_pid70983
01:08:12.113 Removing: /var/run/dpdk/spdk_pid71097
01:08:12.113 Removing: /var/run/dpdk/spdk_pid71190
01:08:12.113 Removing: /var/run/dpdk/spdk_pid71300
01:08:12.113 Removing: /var/run/dpdk/spdk_pid71381
01:08:12.113 Removing: /var/run/dpdk/spdk_pid71462
01:08:12.113 Removing: /var/run/dpdk/spdk_pid71566
01:08:12.113 Removing: /var/run/dpdk/spdk_pid71669
01:08:12.113 Removing: /var/run/dpdk/spdk_pid71765
01:08:12.113 Removing: /var/run/dpdk/spdk_pid71851
01:08:12.113 Removing: /var/run/dpdk/spdk_pid71932
01:08:12.113 Removing: /var/run/dpdk/spdk_pid72042
01:08:12.113 Removing: /var/run/dpdk/spdk_pid72134
01:08:12.113 Removing: /var/run/dpdk/spdk_pid72241
01:08:12.113 Removing: /var/run/dpdk/spdk_pid72318
01:08:12.113 Removing: /var/run/dpdk/spdk_pid72399
01:08:12.374 Removing: /var/run/dpdk/spdk_pid72509
01:08:12.374 Removing: /var/run/dpdk/spdk_pid72607
01:08:12.374 Removing: /var/run/dpdk/spdk_pid72715
01:08:12.374 Removing: /var/run/dpdk/spdk_pid72796
01:08:12.374 Removing: /var/run/dpdk/spdk_pid72879
01:08:12.374 Removing: /var/run/dpdk/spdk_pid72961
01:08:12.374 Removing: /var/run/dpdk/spdk_pid73035
01:08:12.374 Removing: /var/run/dpdk/spdk_pid73144
01:08:12.374 Removing: /var/run/dpdk/spdk_pid73241
01:08:12.374 Removing: /var/run/dpdk/spdk_pid73341
01:08:12.374 Removing: /var/run/dpdk/spdk_pid73427
01:08:12.374 Removing: /var/run/dpdk/spdk_pid73502
01:08:12.374 Removing: /var/run/dpdk/spdk_pid73583
01:08:12.374 Removing: /var/run/dpdk/spdk_pid73663
01:08:12.374 Removing: /var/run/dpdk/spdk_pid73767
01:08:12.374 Removing: /var/run/dpdk/spdk_pid73863
01:08:12.374 Removing: /var/run/dpdk/spdk_pid74007
01:08:12.374 Removing: /var/run/dpdk/spdk_pid74297
01:08:12.374 Removing: /var/run/dpdk/spdk_pid74339
01:08:12.374 Removing: /var/run/dpdk/spdk_pid74826
01:08:12.374 Removing: /var/run/dpdk/spdk_pid75019
01:08:12.374 Removing: /var/run/dpdk/spdk_pid75120
01:08:12.374 Removing: /var/run/dpdk/spdk_pid75232
01:08:12.374 Removing: /var/run/dpdk/spdk_pid75290
01:08:12.374 Removing: /var/run/dpdk/spdk_pid75317
01:08:12.374 Removing: /var/run/dpdk/spdk_pid75602
01:08:12.374 Removing: /var/run/dpdk/spdk_pid75668
01:08:12.374 Removing: /var/run/dpdk/spdk_pid75759
01:08:12.374 Removing: /var/run/dpdk/spdk_pid76183
01:08:12.374 Removing: /var/run/dpdk/spdk_pid76324
01:08:12.374 Removing: /var/run/dpdk/spdk_pid77124
01:08:12.374 Removing: /var/run/dpdk/spdk_pid77273
01:08:12.374 Removing: /var/run/dpdk/spdk_pid77478
01:08:12.374 Removing: /var/run/dpdk/spdk_pid77583
01:08:12.374 Removing: /var/run/dpdk/spdk_pid77945
01:08:12.374 Removing: /var/run/dpdk/spdk_pid78217
01:08:12.374 Removing: /var/run/dpdk/spdk_pid78569
01:08:12.374 Removing: /var/run/dpdk/spdk_pid78775
01:08:12.374 Removing: /var/run/dpdk/spdk_pid78911
01:08:12.374 Removing: /var/run/dpdk/spdk_pid78976
01:08:12.374 Removing: /var/run/dpdk/spdk_pid79125
01:08:12.374 Removing: /var/run/dpdk/spdk_pid79173
01:08:12.374 Removing: /var/run/dpdk/spdk_pid79237
01:08:12.374 Removing: /var/run/dpdk/spdk_pid79453
01:08:12.374 Removing: /var/run/dpdk/spdk_pid79720
01:08:12.374 Removing: /var/run/dpdk/spdk_pid80156
01:08:12.374 Removing: /var/run/dpdk/spdk_pid80633
01:08:12.374 Removing: /var/run/dpdk/spdk_pid81076
01:08:12.374 Removing: /var/run/dpdk/spdk_pid81624
01:08:12.374 Removing: /var/run/dpdk/spdk_pid81773
01:08:12.374 Removing: /var/run/dpdk/spdk_pid81883
01:08:12.374 Removing: /var/run/dpdk/spdk_pid82588
01:08:12.374 Removing: /var/run/dpdk/spdk_pid82680
01:08:12.374 Removing: /var/run/dpdk/spdk_pid83163
01:08:12.374 Removing: /var/run/dpdk/spdk_pid83629
01:08:12.374 Removing: /var/run/dpdk/spdk_pid84166
01:08:12.374 Removing: /var/run/dpdk/spdk_pid84290
01:08:12.374 Removing: /var/run/dpdk/spdk_pid84358
01:08:12.374 Removing: /var/run/dpdk/spdk_pid84431
01:08:12.374 Removing: /var/run/dpdk/spdk_pid84501
01:08:12.374 Removing: /var/run/dpdk/spdk_pid84572
01:08:12.374 Removing: /var/run/dpdk/spdk_pid84785
01:08:12.374 Removing: /var/run/dpdk/spdk_pid84866
01:08:12.374 Removing: /var/run/dpdk/spdk_pid84934
01:08:12.374 Removing: /var/run/dpdk/spdk_pid85003
01:08:12.374 Removing: /var/run/dpdk/spdk_pid85042
01:08:12.374 Removing: /var/run/dpdk/spdk_pid85116
01:08:12.374 Removing: /var/run/dpdk/spdk_pid85241
01:08:12.374 Clean
01:08:12.632 10:19:19 -- common/autotest_common.sh@1453 -- # return 0
01:08:12.632 10:19:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
01:08:12.632 10:19:19 -- common/autotest_common.sh@732 -- # xtrace_disable
01:08:12.632 10:19:19 -- common/autotest_common.sh@10 -- # set +x
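The "Removing:" block above is autotest_cleanup sweeping /var/run/dpdk: the spdk0 runtime directory (memseg fbarrays, memzone, hugepage info) plus one spdk_pid<NNN> entry for every SPDK process the whole job launched, which is why the list doubles as a roster of the run's pids, ending with the 84785/85003/85042/85116/85241 family from this FTL suite. The equivalent sweep, as an illustrative one-liner rather than the helper's exact code:

    # Remove leftover DPDK runtime state from prior SPDK processes.
    sudo rm -rf /var/run/dpdk/spdk0 /var/run/dpdk/spdk_pid*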
01:08:12.632 10:19:19 -- spdk/autotest.sh@391 -- # timing_exit autotest
01:08:12.632 10:19:19 -- common/autotest_common.sh@732 -- # xtrace_disable
01:08:12.632 10:19:19 -- common/autotest_common.sh@10 -- # set +x
01:08:12.632 10:19:19 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:08:12.632 10:19:19 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
01:08:12.632 10:19:19 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
01:08:12.632 10:19:19 -- spdk/autotest.sh@396 -- # [[ y == y ]]
01:08:12.632 10:19:19 -- spdk/autotest.sh@398 -- # hostname
01:08:12.632 10:19:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
01:08:12.890 geninfo: WARNING: invalid characters removed from testname!
01:08:39.447 10:19:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:08:43.635 10:19:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:08:46.165 10:19:53 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:08:48.696 10:19:55 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:08:51.226 10:19:58 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:08:54.513 10:20:01 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
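The coverage epilogue above is a standard lcov pipeline: capture the post-test counters into cov_test.info, merge them with the pre-test baseline, then subtract trees that should not count toward SPDK coverage. Stripped of the repeated --rc flags, the sequence reduces to:

    # Merge baseline + test captures, then filter out foreign or uninteresting code.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info              # bundled DPDK sources
    lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info  # system headers
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info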
01:08:57.044 10:20:04 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
01:08:57.044 10:20:04 -- spdk/autorun.sh@1 -- $ timing_finish
01:08:57.044 10:20:04 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
01:08:57.044 10:20:04 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
01:08:57.044 10:20:04 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
01:08:57.044 10:20:04 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:08:57.044 + [[ -n 5408 ]]
01:08:57.044 + sudo kill 5408
01:08:57.311 [Pipeline] }
01:08:57.323 [Pipeline] // timeout
01:08:57.330 [Pipeline] }
01:08:57.343 [Pipeline] // stage
01:08:57.348 [Pipeline] }
01:08:57.362 [Pipeline] // catchError
01:08:57.370 [Pipeline] stage
01:08:57.373 [Pipeline] { (Stop VM)
01:08:57.383 [Pipeline] sh
01:08:57.658 + vagrant halt
01:09:01.851 ==> default: Halting domain...
01:09:08.471 [Pipeline] sh
01:09:08.750 + vagrant destroy -f
01:09:12.935 ==> default: Removing domain...
01:09:13.881 [Pipeline] sh
01:09:14.159 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
01:09:14.167 [Pipeline] }
01:09:14.182 [Pipeline] // stage
01:09:14.187 [Pipeline] }
01:09:14.200 [Pipeline] // dir
01:09:14.205 [Pipeline] }
01:09:14.219 [Pipeline] // wrap
01:09:14.225 [Pipeline] }
01:09:14.237 [Pipeline] // catchError
01:09:14.246 [Pipeline] stage
01:09:14.248 [Pipeline] { (Epilogue)
01:09:14.270 [Pipeline] sh
01:09:14.552 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
01:09:21.129 [Pipeline] catchError
01:09:21.130 [Pipeline] {
01:09:21.142 [Pipeline] sh
01:09:21.422 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
01:09:21.422 Artifacts sizes are good
01:09:21.430 [Pipeline] }
01:09:21.443 [Pipeline] // catchError
01:09:21.453 [Pipeline] archiveArtifacts
01:09:21.459 Archiving artifacts
01:09:21.560 [Pipeline] cleanWs
01:09:21.571 [WS-CLEANUP] Deleting project workspace...
01:09:21.571 [WS-CLEANUP] Deferred wipeout is used...
01:09:21.577 [WS-CLEANUP] done
01:09:21.579 [Pipeline] }
01:09:21.594 [Pipeline] // stage
01:09:21.599 [Pipeline] }
01:09:21.612 [Pipeline] // node
01:09:21.617 [Pipeline] End of Pipeline
01:09:21.653 Finished: SUCCESS