00:00:00.001 Started by upstream project "autotest-nightly" build number 4341 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3704 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.138 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.139 The recommended git tool is: git 00:00:00.139 using credential 00000000-0000-0000-0000-000000000002 00:00:00.141 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.185 Fetching changes from the remote Git repository 00:00:00.188 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.227 Using shallow fetch with depth 1 00:00:00.227 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.227 > git --version # timeout=10 00:00:00.277 > git --version # 'git version 2.39.2' 00:00:00.277 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.305 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.305 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.972 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.983 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.997 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.997 > git config core.sparsecheckout # timeout=10 00:00:08.008 > git read-tree -mu HEAD # timeout=10 00:00:08.025 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.055 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.055 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.202 [Pipeline] Start of Pipeline 00:00:08.214 [Pipeline] library 00:00:08.215 Loading library shm_lib@master 00:00:08.215 Library shm_lib@master is cached. Copying from home. 00:00:08.228 [Pipeline] node 00:00:08.246 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.247 [Pipeline] { 00:00:08.254 [Pipeline] catchError 00:00:08.255 [Pipeline] { 00:00:08.264 [Pipeline] wrap 00:00:08.270 [Pipeline] { 00:00:08.276 [Pipeline] stage 00:00:08.277 [Pipeline] { (Prologue) 00:00:08.288 [Pipeline] echo 00:00:08.289 Node: VM-host-SM38 00:00:08.293 [Pipeline] cleanWs 00:00:08.303 [WS-CLEANUP] Deleting project workspace... 00:00:08.303 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.310 [WS-CLEANUP] done 00:00:08.514 [Pipeline] setCustomBuildProperty 00:00:08.589 [Pipeline] httpRequest 00:00:09.534 [Pipeline] echo 00:00:09.536 Sorcerer 10.211.164.101 is alive 00:00:09.546 [Pipeline] retry 00:00:09.548 [Pipeline] { 00:00:09.561 [Pipeline] httpRequest 00:00:09.566 HttpMethod: GET 00:00:09.566 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.567 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.568 Response Code: HTTP/1.1 200 OK 00:00:09.569 Success: Status code 200 is in the accepted range: 200,404 00:00:09.569 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.070 [Pipeline] } 00:00:10.084 [Pipeline] // retry 00:00:10.092 [Pipeline] sh 00:00:10.376 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.393 [Pipeline] httpRequest 00:00:10.733 [Pipeline] echo 00:00:10.735 Sorcerer 10.211.164.20 is alive 00:00:10.747 [Pipeline] retry 00:00:10.749 [Pipeline] { 00:00:10.767 [Pipeline] httpRequest 00:00:10.773 HttpMethod: GET 00:00:10.773 URL: http://10.211.164.20/packages/spdk_0354bb8e855e8a9c8d29c7ca3ed0bf52513325c6.tar.gz 00:00:10.774 Sending request to url: http://10.211.164.20/packages/spdk_0354bb8e855e8a9c8d29c7ca3ed0bf52513325c6.tar.gz 00:00:10.775 Response Code: HTTP/1.1 404 Not Found 00:00:10.775 Success: Status code 404 is in the accepted range: 200,404 00:00:10.776 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_0354bb8e855e8a9c8d29c7ca3ed0bf52513325c6.tar.gz 00:00:10.783 [Pipeline] } 00:00:10.805 [Pipeline] // retry 00:00:10.814 [Pipeline] sh 00:00:11.102 + rm -f spdk_0354bb8e855e8a9c8d29c7ca3ed0bf52513325c6.tar.gz 00:00:11.117 [Pipeline] retry 00:00:11.119 [Pipeline] { 00:00:11.141 [Pipeline] checkout 00:00:11.152 The recommended git tool is: NONE 00:00:11.184 using credential 00000000-0000-0000-0000-000000000002 00:00:11.187 Wiping out workspace first. 
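
The 404 above is the expected path: the pipeline first asks the package cache for a pre-built tarball of the revision under test and falls back to a fresh git checkout (below) when the archive is missing. A minimal sketch of that fallback, assuming plain curl and hypothetical variable names; the actual logic lives in the shared Jenkins pipeline library:

    # Sketch of the cache-or-clone fallback shown in this log.
    SHA=0354bb8e855e8a9c8d29c7ca3ed0bf52513325c6      # revision under test
    pkg="spdk_${SHA}.tar.gz"
    if curl -sf "http://10.211.164.20/packages/$pkg" -o "$pkg"; then
        tar --no-same-owner -xf "$pkg"               # cache hit: unpack and go
    else
        rm -f "$pkg"                                 # drop any partial download
        # Cache miss: clone, borrowing objects from the local reference repo
        # to keep the fetch cheap (the same path the log takes below).
        git clone --reference /var/ci_repos/spdk_multi \
            https://review.spdk.io/gerrit/a/spdk/spdk spdk
    fi
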
00:00:11.196 Cloning the remote Git repository 00:00:11.199 Honoring refspec on initial clone 00:00:11.204 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:00:11.206 > git init /var/jenkins/workspace/nvme-vg-autotest/spdk # timeout=10 00:00:11.221 Using reference repository: /var/ci_repos/spdk_multi 00:00:11.221 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:00:11.221 > git --version # timeout=10 00:00:11.225 > git --version # 'git version 2.25.1' 00:00:11.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:11.229 Setting http proxy: proxy-dmz.intel.com:911 00:00:11.229 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/heads/master +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:33.221 Avoid second fetch 00:00:33.277 Checking out Revision 0354bb8e855e8a9c8d29c7ca3ed0bf52513325c6 (FETCH_HEAD) 00:00:33.636 Commit message: "nvme/rdma: Force qp disconnect on pg remove" 00:00:33.199 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:00:33.204 > git config --add remote.origin.fetch refs/heads/master # timeout=10 00:00:33.208 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:33.223 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:33.238 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:33.280 > git config core.sparsecheckout # timeout=10 00:00:33.283 > git checkout -f 0354bb8e855e8a9c8d29c7ca3ed0bf52513325c6 # timeout=10 00:00:33.638 > git rev-list --no-walk 8d3947977640da882a3cdcc21a7575115b7e7787 # timeout=10 00:00:33.667 > git remote # timeout=10 00:00:33.670 > git submodule init # timeout=10 00:00:33.747 > git submodule sync # timeout=10 00:00:33.818 > git config --get remote.origin.url # timeout=10 00:00:33.826 > git submodule init # timeout=10 00:00:33.898 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:00:33.902 > git config --get submodule.dpdk.url # timeout=10 00:00:33.906 > git remote # timeout=10 00:00:33.910 > git config --get remote.origin.url # timeout=10 00:00:33.914 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:00:33.917 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:00:33.921 > git remote # timeout=10 00:00:33.924 > git config --get remote.origin.url # timeout=10 00:00:33.928 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:00:33.931 > git config --get submodule.isa-l.url # timeout=10 00:00:33.935 > git remote # timeout=10 00:00:33.939 > git config --get remote.origin.url # timeout=10 00:00:33.942 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:00:33.945 > git config --get submodule.ocf.url # timeout=10 00:00:33.949 > git remote # timeout=10 00:00:33.952 > git config --get remote.origin.url # timeout=10 00:00:33.956 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:00:33.959 > git config --get submodule.libvfio-user.url # timeout=10 00:00:33.962 > git remote # timeout=10 00:00:33.965 > git config --get remote.origin.url # timeout=10 00:00:33.969 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:00:33.972 > git config --get submodule.xnvme.url # timeout=10 00:00:33.976 > git remote # timeout=10 00:00:33.980 > git config --get remote.origin.url # timeout=10 00:00:33.983 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:00:33.987 > git config --get 
submodule.isa-l-crypto.url # timeout=10 00:00:33.990 > git remote # timeout=10 00:00:33.994 > git config --get remote.origin.url # timeout=10 00:00:33.997 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:00:34.006 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:34.007 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:34.007 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:34.007 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:34.007 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:34.008 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:34.008 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:34.011 Setting http proxy: proxy-dmz.intel.com:911 00:00:34.011 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:00:34.011 Setting http proxy: proxy-dmz.intel.com:911 00:00:34.012 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:00:34.012 Setting http proxy: proxy-dmz.intel.com:911 00:00:34.012 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:00:34.012 Setting http proxy: proxy-dmz.intel.com:911 00:00:34.012 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:00:34.012 Setting http proxy: proxy-dmz.intel.com:911 00:00:34.012 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:00:34.013 Setting http proxy: proxy-dmz.intel.com:911 00:00:34.013 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:00:34.013 Setting http proxy: proxy-dmz.intel.com:911 00:00:34.013 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:00:42.577 [Pipeline] dir 00:00:42.578 Running in /var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:42.579 [Pipeline] { 00:00:42.592 [Pipeline] sh 00:00:42.875 ++ nproc 00:00:42.875 + threads=144 00:00:42.875 + git repack -a -d --threads=144 00:00:49.462 + git submodule foreach git repack -a -d --threads=144 00:00:49.462 Entering 'dpdk' 00:00:52.766 Entering 'intel-ipsec-mb' 00:00:53.027 Entering 'isa-l' 00:00:53.288 Entering 'isa-l-crypto' 00:00:53.550 Entering 'libvfio-user' 00:00:53.812 Entering 'ocf' 00:00:54.073 Entering 'xnvme' 00:00:54.647 + find .git -type f -name alternates -print -delete 00:00:54.647 .git/objects/info/alternates 00:00:54.647 .git/modules/isa-l/objects/info/alternates 00:00:54.647 .git/modules/dpdk/objects/info/alternates 00:00:54.647 .git/modules/xnvme/objects/info/alternates 00:00:54.647 .git/modules/libvfio-user/objects/info/alternates 00:00:54.647 .git/modules/intel-ipsec-mb/objects/info/alternates 00:00:54.647 .git/modules/ocf/objects/info/alternates 00:00:54.647 .git/modules/isa-l-crypto/objects/info/alternates 00:00:54.657 [Pipeline] } 00:00:54.675 [Pipeline] // dir 00:00:54.681 [Pipeline] } 00:00:54.697 [Pipeline] // retry 00:00:54.704 [Pipeline] sh 00:00:54.990 + hash pigz 00:00:54.990 + tar -czf spdk_0354bb8e855e8a9c8d29c7ca3ed0bf52513325c6.tar.gz spdk 00:01:07.218 [Pipeline] retry 00:01:07.221 [Pipeline] { 00:01:07.239 [Pipeline] httpRequest 00:01:07.248 HttpMethod: PUT 00:01:07.248 URL: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_0354bb8e855e8a9c8d29c7ca3ed0bf52513325c6.tar.gz 00:01:07.249 Sending request to 
url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_0354bb8e855e8a9c8d29c7ca3ed0bf52513325c6.tar.gz 00:01:09.995 Response Code: HTTP/1.1 200 OK 00:01:10.002 Success: Status code 200 is in the accepted range: 200 00:01:10.005 [Pipeline] } 00:01:10.022 [Pipeline] // retry 00:01:10.029 [Pipeline] echo 00:01:10.031 00:01:10.031 Locking 00:01:10.031 Waited 0s for lock 00:01:10.031 Everything Fine. Saved: /storage/packages/spdk_0354bb8e855e8a9c8d29c7ca3ed0bf52513325c6.tar.gz 00:01:10.031 00:01:10.035 [Pipeline] sh 00:01:10.321 + git -C spdk log --oneline -n5 00:01:10.321 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove 00:01:10.321 0ea9ac02f accel/mlx5: Create pool of UMRs 00:01:10.321 60adca7e1 lib/mlx5: API to configure UMR 00:01:10.321 c2471e450 nvmf: Clean unassociated_qpairs on connect error 00:01:10.321 5469bd2d1 nvmf/rdma: Fix destroy of uninitialized qpair 00:01:10.343 [Pipeline] writeFile 00:01:10.359 [Pipeline] sh 00:01:10.645 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:10.660 [Pipeline] sh 00:01:10.944 + cat autorun-spdk.conf 00:01:10.944 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.944 SPDK_TEST_NVME=1 00:01:10.944 SPDK_TEST_FTL=1 00:01:10.944 SPDK_TEST_ISAL=1 00:01:10.944 SPDK_RUN_ASAN=1 00:01:10.944 SPDK_RUN_UBSAN=1 00:01:10.944 SPDK_TEST_XNVME=1 00:01:10.944 SPDK_TEST_NVME_FDP=1 00:01:10.944 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.954 RUN_NIGHTLY=1 00:01:10.956 [Pipeline] } 00:01:10.972 [Pipeline] // stage 00:01:10.990 [Pipeline] stage 00:01:10.993 [Pipeline] { (Run VM) 00:01:11.007 [Pipeline] sh 00:01:11.296 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:11.296 + echo 'Start stage prepare_nvme.sh' 00:01:11.296 Start stage prepare_nvme.sh 00:01:11.296 + [[ -n 8 ]] 00:01:11.296 + disk_prefix=ex8 00:01:11.296 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:11.296 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:11.296 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:11.296 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.296 ++ SPDK_TEST_NVME=1 00:01:11.296 ++ SPDK_TEST_FTL=1 00:01:11.296 ++ SPDK_TEST_ISAL=1 00:01:11.296 ++ SPDK_RUN_ASAN=1 00:01:11.296 ++ SPDK_RUN_UBSAN=1 00:01:11.296 ++ SPDK_TEST_XNVME=1 00:01:11.296 ++ SPDK_TEST_NVME_FDP=1 00:01:11.296 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:11.296 ++ RUN_NIGHTLY=1 00:01:11.296 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:11.296 + nvme_files=() 00:01:11.296 + declare -A nvme_files 00:01:11.296 + backend_dir=/var/lib/libvirt/images/backends 00:01:11.296 + nvme_files['nvme.img']=5G 00:01:11.296 + nvme_files['nvme-cmb.img']=5G 00:01:11.296 + nvme_files['nvme-multi0.img']=4G 00:01:11.296 + nvme_files['nvme-multi1.img']=4G 00:01:11.296 + nvme_files['nvme-multi2.img']=4G 00:01:11.296 + nvme_files['nvme-openstack.img']=8G 00:01:11.296 + nvme_files['nvme-zns.img']=5G 00:01:11.296 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:11.296 + (( SPDK_TEST_FTL == 1 )) 00:01:11.296 + nvme_files["nvme-ftl.img"]=6G 00:01:11.296 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:11.296 + nvme_files["nvme-fdp.img"]=1G 00:01:11.296 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:11.296 + for nvme in "${!nvme_files[@]}" 00:01:11.296 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:01:11.296 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.296 + for nvme in "${!nvme_files[@]}" 00:01:11.296 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G 00:01:11.558 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:11.558 + for nvme in "${!nvme_files[@]}" 00:01:11.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:01:11.558 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.558 + for nvme in "${!nvme_files[@]}" 00:01:11.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:01:11.558 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:11.558 + for nvme in "${!nvme_files[@]}" 00:01:11.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:01:11.558 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.558 + for nvme in "${!nvme_files[@]}" 00:01:11.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:01:11.558 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.558 + for nvme in "${!nvme_files[@]}" 00:01:11.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:01:11.819 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.819 + for nvme in "${!nvme_files[@]}" 00:01:11.819 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G 00:01:11.819 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:11.819 + for nvme in "${!nvme_files[@]}" 00:01:11.819 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:01:12.081 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.081 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:01:12.081 + echo 'End stage prepare_nvme.sh' 00:01:12.081 End stage prepare_nvme.sh 00:01:12.095 [Pipeline] sh 00:01:12.381 + DISTRO=fedora39 00:01:12.381 + CPUS=10 00:01:12.381 + RAM=12288 00:01:12.381 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:12.381 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:12.381 00:01:12.381 
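
Each backing file above is produced by SPDK's create_nvme_img.sh; the "Formatting ... fmt=raw ... preallocation=falloc" lines are qemu-img's output, so the loop reduces to roughly the following (a sketch with a trimmed size table, not the script itself):

    # Create raw disk images for the emulated NVMe drives.
    declare -A nvme_files=(
        [nvme.img]=5G [nvme-ftl.img]=6G [nvme-fdp.img]=1G   # subset of the table above
    )
    backend_dir=/var/lib/libvirt/images/backends
    for img in "${!nvme_files[@]}"; do
        sudo qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex8-$img" "${nvme_files[$img]}"
    done
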
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:12.381 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:12.381 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:12.381 HELP=0 00:01:12.381 DRY_RUN=0 00:01:12.381 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img, 00:01:12.381 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:12.381 NVME_AUTO_CREATE=0 00:01:12.381 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,, 00:01:12.381 NVME_CMB=,,,, 00:01:12.381 NVME_PMR=,,,, 00:01:12.381 NVME_ZNS=,,,, 00:01:12.381 NVME_MS=true,,,, 00:01:12.381 NVME_FDP=,,,on, 00:01:12.381 SPDK_VAGRANT_DISTRO=fedora39 00:01:12.381 SPDK_VAGRANT_VMCPU=10 00:01:12.381 SPDK_VAGRANT_VMRAM=12288 00:01:12.381 SPDK_VAGRANT_PROVIDER=libvirt 00:01:12.381 SPDK_VAGRANT_HTTP_PROXY= 00:01:12.381 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:12.381 SPDK_OPENSTACK_NETWORK=0 00:01:12.381 VAGRANT_PACKAGE_BOX=0 00:01:12.381 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:12.381 FORCE_DISTRO=true 00:01:12.381 VAGRANT_BOX_VERSION= 00:01:12.381 EXTRA_VAGRANTFILES= 00:01:12.381 NIC_MODEL=e1000 00:01:12.381 00:01:12.381 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:12.381 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:14.924 Bringing machine 'default' up with 'libvirt' provider... 00:01:15.220 ==> default: Creating image (snapshot of base box volume). 00:01:15.481 ==> default: Creating domain with the following settings... 
00:01:15.481 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733516912_d0c73ae9cac937038cf0 00:01:15.481 ==> default: -- Domain type: kvm 00:01:15.481 ==> default: -- Cpus: 10 00:01:15.481 ==> default: -- Feature: acpi 00:01:15.481 ==> default: -- Feature: apic 00:01:15.481 ==> default: -- Feature: pae 00:01:15.481 ==> default: -- Memory: 12288M 00:01:15.481 ==> default: -- Memory Backing: hugepages: 00:01:15.481 ==> default: -- Management MAC: 00:01:15.481 ==> default: -- Loader: 00:01:15.481 ==> default: -- Nvram: 00:01:15.481 ==> default: -- Base box: spdk/fedora39 00:01:15.481 ==> default: -- Storage pool: default 00:01:15.481 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733516912_d0c73ae9cac937038cf0.img (20G) 00:01:15.481 ==> default: -- Volume Cache: default 00:01:15.481 ==> default: -- Kernel: 00:01:15.481 ==> default: -- Initrd: 00:01:15.481 ==> default: -- Graphics Type: vnc 00:01:15.481 ==> default: -- Graphics Port: -1 00:01:15.481 ==> default: -- Graphics IP: 127.0.0.1 00:01:15.481 ==> default: -- Graphics Password: Not defined 00:01:15.481 ==> default: -- Video Type: cirrus 00:01:15.481 ==> default: -- Video VRAM: 9216 00:01:15.481 ==> default: -- Sound Type: 00:01:15.481 ==> default: -- Keymap: en-us 00:01:15.481 ==> default: -- TPM Path: 00:01:15.481 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:15.481 ==> default: -- Command line args: 00:01:15.481 ==> default: -> value=-device, 00:01:15.481 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:15.481 ==> default: -> value=-drive, 00:01:15.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:15.481 ==> default: -> value=-device, 00:01:15.481 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:15.481 ==> default: -> value=-device, 00:01:15.481 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:15.481 ==> default: -> value=-drive, 00:01:15.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0, 00:01:15.481 ==> default: -> value=-device, 00:01:15.481 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.481 ==> default: -> value=-device, 00:01:15.481 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:15.481 ==> default: -> value=-drive, 00:01:15.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:15.481 ==> default: -> value=-device, 00:01:15.481 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.481 ==> default: -> value=-drive, 00:01:15.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:15.481 ==> default: -> value=-device, 00:01:15.481 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.481 ==> default: -> value=-drive, 00:01:15.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:15.481 ==> default: -> value=-device, 00:01:15.481 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.481 ==> default: -> value=-device, 00:01:15.481 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:15.481 ==> default: -> value=-device, 00:01:15.481 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:15.481 ==> default: -> value=-drive, 00:01:15.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:15.481 ==> default: -> value=-device, 00:01:15.481 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.742 ==> default: Creating shared folders metadata... 00:01:15.742 ==> default: Starting domain. 00:01:17.654 ==> default: Waiting for domain to get an IP address... 00:01:35.846 ==> default: Waiting for SSH to become available... 00:01:35.846 ==> default: Configuring and enabling network interfaces... 00:01:39.146 default: SSH address: 192.168.121.223:22 00:01:39.146 default: SSH username: vagrant 00:01:39.146 default: SSH auth method: private key 00:01:41.067 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:49.239 ==> default: Mounting SSHFS shared folder... 00:01:51.150 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:51.150 ==> default: Checking Mount.. 00:01:52.091 ==> default: Folder Successfully Mounted! 00:01:52.091 00:01:52.091 SUCCESS! 00:01:52.091 00:01:52.091 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:52.091 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:52.091 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:52.091 00:01:52.101 [Pipeline] } 00:01:52.115 [Pipeline] // stage 00:01:52.122 [Pipeline] dir 00:01:52.123 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:52.124 [Pipeline] { 00:01:52.135 [Pipeline] catchError 00:01:52.136 [Pipeline] { 00:01:52.146 [Pipeline] sh 00:01:52.426 + vagrant ssh-config --host vagrant 00:01:52.426 + sed -ne '/^Host/,$p' 00:01:52.426 + tee ssh_conf 00:01:55.734 Host vagrant 00:01:55.734 HostName 192.168.121.223 00:01:55.734 User vagrant 00:01:55.734 Port 22 00:01:55.734 UserKnownHostsFile /dev/null 00:01:55.734 StrictHostKeyChecking no 00:01:55.734 PasswordAuthentication no 00:01:55.734 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:55.734 IdentitiesOnly yes 00:01:55.734 LogLevel FATAL 00:01:55.734 ForwardAgent yes 00:01:55.734 ForwardX11 yes 00:01:55.734 00:01:55.752 [Pipeline] withEnv 00:01:55.755 [Pipeline] { 00:01:55.772 [Pipeline] sh 00:01:56.059 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:56.059 source /etc/os-release 00:01:56.059 [[ -e /image.version ]] && img=$(< /image.version) 00:01:56.059 # Minimal, systemd-like check. 
00:01:56.059 if [[ -e /.dockerenv ]]; then 00:01:56.059 # Clear garbage from the node'\''s name: 00:01:56.059 # agt-er_autotest_547-896 -> autotest_547-896 00:01:56.059 # $HOSTNAME is the actual container id 00:01:56.059 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:56.059 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:56.059 # We can assume this is a mount from a host where container is running, 00:01:56.059 # so fetch its hostname to easily identify the target swarm worker. 00:01:56.059 container="$(< /etc/hostname) ($agent)" 00:01:56.059 else 00:01:56.059 # Fallback 00:01:56.059 container=$agent 00:01:56.059 fi 00:01:56.059 fi 00:01:56.059 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:56.059 ' 00:01:56.335 [Pipeline] } 00:01:56.349 [Pipeline] // withEnv 00:01:56.356 [Pipeline] setCustomBuildProperty 00:01:56.368 [Pipeline] stage 00:01:56.370 [Pipeline] { (Tests) 00:01:56.385 [Pipeline] sh 00:01:56.682 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:56.964 [Pipeline] sh 00:01:57.247 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:57.525 [Pipeline] timeout 00:01:57.526 Timeout set to expire in 50 min 00:01:57.527 [Pipeline] { 00:01:57.539 [Pipeline] sh 00:01:57.820 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:58.393 HEAD is now at 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove 00:01:58.406 [Pipeline] sh 00:01:58.688 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:58.962 [Pipeline] sh 00:01:59.243 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:59.523 [Pipeline] sh 00:01:59.875 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:59.875 ++ readlink -f spdk_repo 00:01:59.875 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:59.875 + [[ -n /home/vagrant/spdk_repo ]] 00:01:59.875 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:59.875 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:59.875 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:59.875 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:59.875 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:59.875 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:59.875 + cd /home/vagrant/spdk_repo 00:01:59.875 + source /etc/os-release 00:01:59.875 ++ NAME='Fedora Linux' 00:01:59.875 ++ VERSION='39 (Cloud Edition)' 00:01:59.875 ++ ID=fedora 00:01:59.875 ++ VERSION_ID=39 00:01:59.875 ++ VERSION_CODENAME= 00:01:59.875 ++ PLATFORM_ID=platform:f39 00:01:59.875 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:59.875 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:59.875 ++ LOGO=fedora-logo-icon 00:01:59.875 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:59.875 ++ HOME_URL=https://fedoraproject.org/ 00:01:59.875 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:59.875 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:59.875 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:59.875 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:59.875 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:59.875 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:59.875 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:59.875 ++ SUPPORT_END=2024-11-12 00:01:59.875 ++ VARIANT='Cloud Edition' 00:01:59.875 ++ VARIANT_ID=cloud 00:01:59.875 + uname -a 00:01:59.875 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:59.875 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:00.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:00.708 Hugepages 00:02:00.708 node hugesize free / total 00:02:00.708 node0 1048576kB 0 / 0 00:02:00.708 node0 2048kB 0 / 0 00:02:00.708 00:02:00.709 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:00.709 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:00.709 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:00.709 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:00.709 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:00.970 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:00.970 + rm -f /tmp/spdk-ld-path 00:02:00.970 + source autorun-spdk.conf 00:02:00.970 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.970 ++ SPDK_TEST_NVME=1 00:02:00.970 ++ SPDK_TEST_FTL=1 00:02:00.970 ++ SPDK_TEST_ISAL=1 00:02:00.970 ++ SPDK_RUN_ASAN=1 00:02:00.970 ++ SPDK_RUN_UBSAN=1 00:02:00.970 ++ SPDK_TEST_XNVME=1 00:02:00.970 ++ SPDK_TEST_NVME_FDP=1 00:02:00.970 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.970 ++ RUN_NIGHTLY=1 00:02:00.970 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:00.970 + [[ -n '' ]] 00:02:00.970 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:00.970 + for M in /var/spdk/build-*-manifest.txt 00:02:00.970 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:00.970 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:00.970 + for M in /var/spdk/build-*-manifest.txt 00:02:00.970 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:00.970 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:00.970 + for M in /var/spdk/build-*-manifest.txt 00:02:00.970 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:00.970 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:00.970 ++ uname 00:02:00.970 + [[ Linux == \L\i\n\u\x ]] 00:02:00.970 + sudo dmesg -T 00:02:00.970 + sudo dmesg --clear 00:02:00.970 + dmesg_pid=5026 00:02:00.970 
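
The setup.sh status output above confirms the guest sees all four controllers defined on the QEMU command line earlier (serials 12340-12343, each QEMU's 1b36:0010 NVMe device). A quick manual check from inside the VM might look like this sketch (sysfs layout as on this Fedora 39 guest):

    # List the emulated controllers and confirm the PCI IDs match QEMU's NVMe model.
    for ctrl in /sys/class/nvme/nvme*; do
        printf '%s: serial=%s model=%s\n' "${ctrl##*/}" \
            "$(cat "$ctrl/serial")" "$(cat "$ctrl/model")"
    done
    lspci -nn | grep '1b36:0010'    # expect 00:10.0 through 00:13.0
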
+ [[ Fedora Linux == FreeBSD ]] 00:02:00.970 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:00.970 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:00.970 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:00.970 + [[ -x /usr/src/fio-static/fio ]] 00:02:00.970 + sudo dmesg -Tw 00:02:00.970 + export FIO_BIN=/usr/src/fio-static/fio 00:02:00.970 + FIO_BIN=/usr/src/fio-static/fio 00:02:00.970 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:00.970 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:00.970 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:00.970 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:00.970 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:00.970 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:00.970 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:00.970 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:00.970 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:00.970 20:29:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:00.970 20:29:18 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:00.970 20:29:18 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.970 20:29:18 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:00.970 20:29:18 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:00.970 20:29:18 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:00.970 20:29:18 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:00.970 20:29:18 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:00.970 20:29:18 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:00.970 20:29:18 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:00.970 20:29:18 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.970 20:29:18 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:02:00.970 20:29:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:00.970 20:29:18 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:01.233 20:29:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:01.233 20:29:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:01.233 20:29:18 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:01.233 20:29:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:01.233 20:29:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:01.233 20:29:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:01.233 20:29:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.233 20:29:18 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.233 20:29:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.233 20:29:18 -- paths/export.sh@5 -- $ export PATH 00:02:01.233 20:29:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.233 20:29:18 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:01.233 20:29:18 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:01.233 20:29:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733516958.XXXXXX 00:02:01.233 20:29:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733516958.2SyIMV 00:02:01.233 20:29:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:01.233 20:29:18 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:01.233 20:29:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:01.233 20:29:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:01.233 20:29:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:01.233 20:29:18 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:01.233 20:29:18 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:01.233 20:29:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.233 20:29:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:01.233 20:29:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:01.233 20:29:18 -- pm/common@17 -- $ local monitor 00:02:01.233 20:29:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.233 20:29:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.233 20:29:18 -- pm/common@25 -- $ sleep 1 00:02:01.233 20:29:18 -- pm/common@21 -- $ date +%s 00:02:01.233 20:29:18 -- pm/common@21 -- $ date +%s 00:02:01.233 20:29:18 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733516958 00:02:01.233 20:29:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733516958 00:02:01.233 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733516958_collect-cpu-load.pm.log 00:02:01.233 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733516958_collect-vmstat.pm.log 00:02:02.177 20:29:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:02.178 20:29:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:02.178 20:29:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:02.178 20:29:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:02.178 20:29:19 -- spdk/autobuild.sh@16 -- $ date -u 00:02:02.178 Fri Dec 6 08:29:19 PM UTC 2024 00:02:02.178 20:29:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:02.178 v25.01-pre-309-g0354bb8e8 00:02:02.178 20:29:19 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:02.178 20:29:19 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:02.178 20:29:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:02.178 20:29:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:02.178 20:29:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.178 ************************************ 00:02:02.178 START TEST asan 00:02:02.178 ************************************ 00:02:02.178 using asan 00:02:02.178 20:29:19 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:02.178 00:02:02.178 real 0m0.000s 00:02:02.178 user 0m0.000s 00:02:02.178 sys 0m0.000s 00:02:02.178 20:29:19 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:02.178 ************************************ 00:02:02.178 END TEST asan 00:02:02.178 20:29:19 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:02.178 ************************************ 00:02:02.178 20:29:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:02.178 20:29:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:02.178 20:29:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:02.178 20:29:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:02.178 20:29:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.178 ************************************ 00:02:02.178 START TEST ubsan 00:02:02.178 ************************************ 00:02:02.178 using ubsan 00:02:02.178 20:29:19 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:02.178 00:02:02.178 real 0m0.000s 00:02:02.178 user 0m0.000s 00:02:02.178 sys 0m0.000s 00:02:02.178 20:29:19 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:02.178 ************************************ 00:02:02.178 END TEST ubsan 00:02:02.178 ************************************ 00:02:02.178 20:29:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:02.178 20:29:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:02.178 20:29:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:02.178 20:29:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:02.178 20:29:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:02.178 20:29:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:02.178 20:29:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:02.178 20:29:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
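
The START/END banners and zero-length timings in the asan and ubsan blocks above come from run_test, the wrapper SPDK's autotest_common.sh puts around every test step. Stripped of its xtrace and argument-count handling, it behaves roughly like this simplified sketch:

    # Simplified run_test: banner, time the command, banner again.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test asan echo 'using asan'    # as invoked by autobuild.sh above
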
00:02:02.178 20:29:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:02.178 20:29:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:02.439 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:02.439 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:02.699 Using 'verbs' RDMA provider 00:02:13.705 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:23.687 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:23.687 Creating mk/config.mk...done. 00:02:23.687 Creating mk/cc.flags.mk...done. 00:02:23.687 Type 'make' to build. 00:02:23.687 20:29:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:23.687 20:29:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:23.687 20:29:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:23.687 20:29:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.687 ************************************ 00:02:23.687 START TEST make 00:02:23.687 ************************************ 00:02:23.687 20:29:40 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:23.687 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:23.687 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:23.687 meson setup builddir \ 00:02:23.687 -Dwith-libaio=enabled \ 00:02:23.687 -Dwith-liburing=enabled \ 00:02:23.687 -Dwith-libvfn=disabled \ 00:02:23.687 -Dwith-spdk=disabled \ 00:02:23.687 -Dexamples=false \ 00:02:23.687 -Dtests=false \ 00:02:23.687 -Dtools=false && \ 00:02:23.687 meson compile -C builddir && \ 00:02:23.687 cd -) 00:02:23.687 make[1]: Nothing to be done for 'all'. 
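
make's first job is the bundled xnvme subproject, configured with the meson invocation echoed above: -Dwith-spdk=disabled avoids a circular dependency on SPDK itself, and examples, tests and tools are switched off so only the library gets built (hence the "Subproject spdk : skipped" line in the configure output that follows). Reproducing just that step by hand, with the same flags the log shows:

    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson setup builddir \
        -Dwith-libaio=enabled -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir
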
00:02:25.602 The Meson build system 00:02:25.602 Version: 1.5.0 00:02:25.602 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:25.602 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:25.602 Build type: native build 00:02:25.602 Project name: xnvme 00:02:25.602 Project version: 0.7.5 00:02:25.602 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:25.602 C linker for the host machine: cc ld.bfd 2.40-14 00:02:25.602 Host machine cpu family: x86_64 00:02:25.602 Host machine cpu: x86_64 00:02:25.602 Message: host_machine.system: linux 00:02:25.602 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:25.602 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:25.602 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:25.602 Run-time dependency threads found: YES 00:02:25.602 Has header "setupapi.h" : NO 00:02:25.602 Has header "linux/blkzoned.h" : YES 00:02:25.602 Has header "linux/blkzoned.h" : YES (cached) 00:02:25.602 Has header "libaio.h" : YES 00:02:25.602 Library aio found: YES 00:02:25.602 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:25.602 Run-time dependency liburing found: YES 2.2 00:02:25.602 Dependency libvfn skipped: feature with-libvfn disabled 00:02:25.602 Found CMake: /usr/bin/cmake (3.27.7) 00:02:25.602 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:25.602 Subproject spdk : skipped: feature with-spdk disabled 00:02:25.602 Run-time dependency appleframeworks found: NO (tried framework) 00:02:25.602 Run-time dependency appleframeworks found: NO (tried framework) 00:02:25.602 Library rt found: YES 00:02:25.602 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:25.602 Configuring xnvme_config.h using configuration 00:02:25.602 Configuring xnvme.spec using configuration 00:02:25.602 Run-time dependency bash-completion found: YES 2.11 00:02:25.602 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:25.602 Program cp found: YES (/usr/bin/cp) 00:02:25.602 Build targets in project: 3 00:02:25.602 00:02:25.602 xnvme 0.7.5 00:02:25.602 00:02:25.602 Subprojects 00:02:25.602 spdk : NO Feature 'with-spdk' disabled 00:02:25.602 00:02:25.602 User defined options 00:02:25.602 examples : false 00:02:25.602 tests : false 00:02:25.602 tools : false 00:02:25.602 with-libaio : enabled 00:02:25.602 with-liburing: enabled 00:02:25.602 with-libvfn : disabled 00:02:25.602 with-spdk : disabled 00:02:25.602 00:02:25.602 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:25.863 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:25.863 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:25.864 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:25.864 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:25.864 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:25.864 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:25.864 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:25.864 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:25.864 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:26.125 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:26.125 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:26.125 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:26.125 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:26.125 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:26.125 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:26.125 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:26.125 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:26.125 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:26.125 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:26.125 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:26.125 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:26.125 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:26.125 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:26.125 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:26.125 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:26.125 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:26.125 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:26.125 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:26.126 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:26.126 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:26.126 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:26.126 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:26.126 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:26.126 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:26.126 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:26.126 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:26.126 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:26.126 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:26.126 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:26.126 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:26.126 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:26.126 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:26.126 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:26.126 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:26.126 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:26.126 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:26.126 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:26.126 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:26.386 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:26.386 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:26.386 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:26.386 [51/76] 
Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:26.386 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:26.386 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:26.386 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:26.386 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:26.386 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:26.386 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:26.386 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:26.386 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:26.386 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:26.386 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:26.386 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:26.386 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:26.386 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:26.386 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:26.386 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:26.386 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:26.386 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:26.645 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:26.645 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:26.645 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:26.645 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:26.645 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:26.903 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:26.903 [75/76] Linking static target lib/libxnvme.a 00:02:26.903 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:26.903 INFO: autodetecting backend as ninja 00:02:26.903 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:26.903 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:33.467 The Meson build system 00:02:33.467 Version: 1.5.0 00:02:33.467 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:33.467 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:33.467 Build type: native build 00:02:33.467 Program cat found: YES (/usr/bin/cat) 00:02:33.467 Project name: DPDK 00:02:33.467 Project version: 24.03.0 00:02:33.467 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:33.467 C linker for the host machine: cc ld.bfd 2.40-14 00:02:33.467 Host machine cpu family: x86_64 00:02:33.467 Host machine cpu: x86_64 00:02:33.467 Message: ## Building in Developer Mode ## 00:02:33.467 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:33.467 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:33.467 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:33.467 Program python3 found: YES (/usr/bin/python3) 00:02:33.467 Program cat found: YES (/usr/bin/cat) 00:02:33.467 Compiler for C supports arguments -march=native: YES 00:02:33.467 Checking for size of "void *" : 8 00:02:33.467 Checking for size of "void *" : 8 (cached) 00:02:33.467 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:33.467 Library m found: YES 00:02:33.467 Library numa found: YES 00:02:33.467 Has header "numaif.h" : YES 00:02:33.467 Library fdt found: NO 00:02:33.467 Library execinfo found: NO 00:02:33.467 Has header "execinfo.h" : YES 00:02:33.467 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:33.467 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:33.467 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:33.467 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:33.467 Run-time dependency openssl found: YES 3.1.1 00:02:33.467 Run-time dependency libpcap found: YES 1.10.4 00:02:33.467 Has header "pcap.h" with dependency libpcap: YES 00:02:33.467 Compiler for C supports arguments -Wcast-qual: YES 00:02:33.467 Compiler for C supports arguments -Wdeprecated: YES 00:02:33.467 Compiler for C supports arguments -Wformat: YES 00:02:33.467 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:33.467 Compiler for C supports arguments -Wformat-security: NO 00:02:33.467 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:33.467 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:33.467 Compiler for C supports arguments -Wnested-externs: YES 00:02:33.467 Compiler for C supports arguments -Wold-style-definition: YES 00:02:33.467 Compiler for C supports arguments -Wpointer-arith: YES 00:02:33.467 Compiler for C supports arguments -Wsign-compare: YES 00:02:33.467 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:33.467 Compiler for C supports arguments -Wundef: YES 00:02:33.467 Compiler for C supports arguments -Wwrite-strings: YES 00:02:33.467 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:33.467 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:33.467 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:33.467 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:33.467 Program objdump found: YES (/usr/bin/objdump) 00:02:33.467 Compiler for C supports arguments -mavx512f: YES 00:02:33.467 Checking if "AVX512 checking" compiles: YES 00:02:33.467 Fetching value of define "__SSE4_2__" : 1 00:02:33.467 Fetching value of define "__AES__" : 1 00:02:33.467 Fetching value of define "__AVX__" : 1 00:02:33.468 Fetching value of define "__AVX2__" : 1 00:02:33.468 Fetching value of define "__AVX512BW__" : 1 00:02:33.468 Fetching value of define "__AVX512CD__" : 1 00:02:33.468 Fetching value of define "__AVX512DQ__" : 1 00:02:33.468 Fetching value of define "__AVX512F__" : 1 00:02:33.468 Fetching value of define "__AVX512VL__" : 1 00:02:33.468 Fetching value of define "__PCLMUL__" : 1 00:02:33.468 Fetching value of define "__RDRND__" : 1 00:02:33.468 Fetching value of define "__RDSEED__" : 1 00:02:33.468 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:33.468 Fetching value of define "__znver1__" : (undefined) 00:02:33.468 Fetching value of define "__znver2__" : (undefined) 00:02:33.468 Fetching value of define "__znver3__" : (undefined) 00:02:33.468 Fetching value of define "__znver4__" : (undefined) 00:02:33.468 Library asan found: YES 00:02:33.468 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:33.468 Message: lib/log: Defining dependency "log" 00:02:33.468 Message: lib/kvargs: Defining dependency "kvargs" 00:02:33.468 Message: lib/telemetry: Defining dependency "telemetry" 00:02:33.468 Library rt found: YES 00:02:33.468 Checking for function "getentropy" : NO 00:02:33.468 Message: 
lib/eal: Defining dependency "eal" 00:02:33.468 Message: lib/ring: Defining dependency "ring" 00:02:33.468 Message: lib/rcu: Defining dependency "rcu" 00:02:33.468 Message: lib/mempool: Defining dependency "mempool" 00:02:33.468 Message: lib/mbuf: Defining dependency "mbuf" 00:02:33.468 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:33.468 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:33.468 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:33.468 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:33.468 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:33.468 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:33.468 Compiler for C supports arguments -mpclmul: YES 00:02:33.468 Compiler for C supports arguments -maes: YES 00:02:33.468 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:33.468 Compiler for C supports arguments -mavx512bw: YES 00:02:33.468 Compiler for C supports arguments -mavx512dq: YES 00:02:33.468 Compiler for C supports arguments -mavx512vl: YES 00:02:33.468 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:33.468 Compiler for C supports arguments -mavx2: YES 00:02:33.468 Compiler for C supports arguments -mavx: YES 00:02:33.468 Message: lib/net: Defining dependency "net" 00:02:33.468 Message: lib/meter: Defining dependency "meter" 00:02:33.468 Message: lib/ethdev: Defining dependency "ethdev" 00:02:33.468 Message: lib/pci: Defining dependency "pci" 00:02:33.468 Message: lib/cmdline: Defining dependency "cmdline" 00:02:33.468 Message: lib/hash: Defining dependency "hash" 00:02:33.468 Message: lib/timer: Defining dependency "timer" 00:02:33.468 Message: lib/compressdev: Defining dependency "compressdev" 00:02:33.468 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:33.468 Message: lib/dmadev: Defining dependency "dmadev" 00:02:33.468 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:33.468 Message: lib/power: Defining dependency "power" 00:02:33.468 Message: lib/reorder: Defining dependency "reorder" 00:02:33.468 Message: lib/security: Defining dependency "security" 00:02:33.468 Has header "linux/userfaultfd.h" : YES 00:02:33.468 Has header "linux/vduse.h" : YES 00:02:33.468 Message: lib/vhost: Defining dependency "vhost" 00:02:33.468 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:33.468 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:33.468 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:33.468 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:33.468 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:33.468 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:33.468 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:33.468 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:33.468 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:33.468 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:33.468 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:33.468 Configuring doxy-api-html.conf using configuration 00:02:33.468 Configuring doxy-api-man.conf using configuration 00:02:33.468 Program mandb found: YES (/usr/bin/mandb) 00:02:33.468 Program sphinx-build found: NO 00:02:33.468 Configuring rte_build_config.h using configuration 00:02:33.468 Message: 00:02:33.468 ================= 00:02:33.468 Applications Enabled 00:02:33.468 
================= 00:02:33.468 00:02:33.468 apps: 00:02:33.468 00:02:33.468 00:02:33.468 Message: 00:02:33.468 ================= 00:02:33.468 Libraries Enabled 00:02:33.468 ================= 00:02:33.468 00:02:33.468 libs: 00:02:33.468 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:33.468 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:33.468 cryptodev, dmadev, power, reorder, security, vhost, 00:02:33.468 00:02:33.468 Message: 00:02:33.468 =============== 00:02:33.468 Drivers Enabled 00:02:33.468 =============== 00:02:33.468 00:02:33.468 common: 00:02:33.468 00:02:33.468 bus: 00:02:33.468 pci, vdev, 00:02:33.468 mempool: 00:02:33.468 ring, 00:02:33.468 dma: 00:02:33.468 00:02:33.468 net: 00:02:33.468 00:02:33.468 crypto: 00:02:33.468 00:02:33.468 compress: 00:02:33.468 00:02:33.468 vdpa: 00:02:33.468 00:02:33.468 00:02:33.468 Message: 00:02:33.468 ================= 00:02:33.468 Content Skipped 00:02:33.468 ================= 00:02:33.468 00:02:33.468 apps: 00:02:33.468 dumpcap: explicitly disabled via build config 00:02:33.468 graph: explicitly disabled via build config 00:02:33.468 pdump: explicitly disabled via build config 00:02:33.468 proc-info: explicitly disabled via build config 00:02:33.468 test-acl: explicitly disabled via build config 00:02:33.468 test-bbdev: explicitly disabled via build config 00:02:33.468 test-cmdline: explicitly disabled via build config 00:02:33.468 test-compress-perf: explicitly disabled via build config 00:02:33.468 test-crypto-perf: explicitly disabled via build config 00:02:33.468 test-dma-perf: explicitly disabled via build config 00:02:33.468 test-eventdev: explicitly disabled via build config 00:02:33.468 test-fib: explicitly disabled via build config 00:02:33.468 test-flow-perf: explicitly disabled via build config 00:02:33.468 test-gpudev: explicitly disabled via build config 00:02:33.468 test-mldev: explicitly disabled via build config 00:02:33.468 test-pipeline: explicitly disabled via build config 00:02:33.468 test-pmd: explicitly disabled via build config 00:02:33.468 test-regex: explicitly disabled via build config 00:02:33.468 test-sad: explicitly disabled via build config 00:02:33.468 test-security-perf: explicitly disabled via build config 00:02:33.468 00:02:33.468 libs: 00:02:33.468 argparse: explicitly disabled via build config 00:02:33.468 metrics: explicitly disabled via build config 00:02:33.468 acl: explicitly disabled via build config 00:02:33.468 bbdev: explicitly disabled via build config 00:02:33.468 bitratestats: explicitly disabled via build config 00:02:33.468 bpf: explicitly disabled via build config 00:02:33.468 cfgfile: explicitly disabled via build config 00:02:33.468 distributor: explicitly disabled via build config 00:02:33.468 efd: explicitly disabled via build config 00:02:33.468 eventdev: explicitly disabled via build config 00:02:33.468 dispatcher: explicitly disabled via build config 00:02:33.468 gpudev: explicitly disabled via build config 00:02:33.468 gro: explicitly disabled via build config 00:02:33.468 gso: explicitly disabled via build config 00:02:33.468 ip_frag: explicitly disabled via build config 00:02:33.468 jobstats: explicitly disabled via build config 00:02:33.468 latencystats: explicitly disabled via build config 00:02:33.468 lpm: explicitly disabled via build config 00:02:33.468 member: explicitly disabled via build config 00:02:33.468 pcapng: explicitly disabled via build config 00:02:33.468 rawdev: explicitly disabled via build config 00:02:33.468 regexdev: explicitly 
disabled via build config 00:02:33.468 mldev: explicitly disabled via build config 00:02:33.468 rib: explicitly disabled via build config 00:02:33.468 sched: explicitly disabled via build config 00:02:33.468 stack: explicitly disabled via build config 00:02:33.468 ipsec: explicitly disabled via build config 00:02:33.468 pdcp: explicitly disabled via build config 00:02:33.468 fib: explicitly disabled via build config 00:02:33.468 port: explicitly disabled via build config 00:02:33.468 pdump: explicitly disabled via build config 00:02:33.468 table: explicitly disabled via build config 00:02:33.468 pipeline: explicitly disabled via build config 00:02:33.468 graph: explicitly disabled via build config 00:02:33.468 node: explicitly disabled via build config 00:02:33.468 00:02:33.468 drivers: 00:02:33.468 common/cpt: not in enabled drivers build config 00:02:33.468 common/dpaax: not in enabled drivers build config 00:02:33.468 common/iavf: not in enabled drivers build config 00:02:33.468 common/idpf: not in enabled drivers build config 00:02:33.468 common/ionic: not in enabled drivers build config 00:02:33.468 common/mvep: not in enabled drivers build config 00:02:33.468 common/octeontx: not in enabled drivers build config 00:02:33.468 bus/auxiliary: not in enabled drivers build config 00:02:33.468 bus/cdx: not in enabled drivers build config 00:02:33.468 bus/dpaa: not in enabled drivers build config 00:02:33.468 bus/fslmc: not in enabled drivers build config 00:02:33.468 bus/ifpga: not in enabled drivers build config 00:02:33.468 bus/platform: not in enabled drivers build config 00:02:33.468 bus/uacce: not in enabled drivers build config 00:02:33.468 bus/vmbus: not in enabled drivers build config 00:02:33.468 common/cnxk: not in enabled drivers build config 00:02:33.468 common/mlx5: not in enabled drivers build config 00:02:33.468 common/nfp: not in enabled drivers build config 00:02:33.468 common/nitrox: not in enabled drivers build config 00:02:33.468 common/qat: not in enabled drivers build config 00:02:33.468 common/sfc_efx: not in enabled drivers build config 00:02:33.468 mempool/bucket: not in enabled drivers build config 00:02:33.468 mempool/cnxk: not in enabled drivers build config 00:02:33.468 mempool/dpaa: not in enabled drivers build config 00:02:33.468 mempool/dpaa2: not in enabled drivers build config 00:02:33.469 mempool/octeontx: not in enabled drivers build config 00:02:33.469 mempool/stack: not in enabled drivers build config 00:02:33.469 dma/cnxk: not in enabled drivers build config 00:02:33.469 dma/dpaa: not in enabled drivers build config 00:02:33.469 dma/dpaa2: not in enabled drivers build config 00:02:33.469 dma/hisilicon: not in enabled drivers build config 00:02:33.469 dma/idxd: not in enabled drivers build config 00:02:33.469 dma/ioat: not in enabled drivers build config 00:02:33.469 dma/skeleton: not in enabled drivers build config 00:02:33.469 net/af_packet: not in enabled drivers build config 00:02:33.469 net/af_xdp: not in enabled drivers build config 00:02:33.469 net/ark: not in enabled drivers build config 00:02:33.469 net/atlantic: not in enabled drivers build config 00:02:33.469 net/avp: not in enabled drivers build config 00:02:33.469 net/axgbe: not in enabled drivers build config 00:02:33.469 net/bnx2x: not in enabled drivers build config 00:02:33.469 net/bnxt: not in enabled drivers build config 00:02:33.469 net/bonding: not in enabled drivers build config 00:02:33.469 net/cnxk: not in enabled drivers build config 00:02:33.469 net/cpfl: not in enabled drivers 
build config 00:02:33.469 net/cxgbe: not in enabled drivers build config 00:02:33.469 net/dpaa: not in enabled drivers build config 00:02:33.469 net/dpaa2: not in enabled drivers build config 00:02:33.469 net/e1000: not in enabled drivers build config 00:02:33.469 net/ena: not in enabled drivers build config 00:02:33.469 net/enetc: not in enabled drivers build config 00:02:33.469 net/enetfec: not in enabled drivers build config 00:02:33.469 net/enic: not in enabled drivers build config 00:02:33.469 net/failsafe: not in enabled drivers build config 00:02:33.469 net/fm10k: not in enabled drivers build config 00:02:33.469 net/gve: not in enabled drivers build config 00:02:33.469 net/hinic: not in enabled drivers build config 00:02:33.469 net/hns3: not in enabled drivers build config 00:02:33.469 net/i40e: not in enabled drivers build config 00:02:33.469 net/iavf: not in enabled drivers build config 00:02:33.469 net/ice: not in enabled drivers build config 00:02:33.469 net/idpf: not in enabled drivers build config 00:02:33.469 net/igc: not in enabled drivers build config 00:02:33.469 net/ionic: not in enabled drivers build config 00:02:33.469 net/ipn3ke: not in enabled drivers build config 00:02:33.469 net/ixgbe: not in enabled drivers build config 00:02:33.469 net/mana: not in enabled drivers build config 00:02:33.469 net/memif: not in enabled drivers build config 00:02:33.469 net/mlx4: not in enabled drivers build config 00:02:33.469 net/mlx5: not in enabled drivers build config 00:02:33.469 net/mvneta: not in enabled drivers build config 00:02:33.469 net/mvpp2: not in enabled drivers build config 00:02:33.469 net/netvsc: not in enabled drivers build config 00:02:33.469 net/nfb: not in enabled drivers build config 00:02:33.469 net/nfp: not in enabled drivers build config 00:02:33.469 net/ngbe: not in enabled drivers build config 00:02:33.469 net/null: not in enabled drivers build config 00:02:33.469 net/octeontx: not in enabled drivers build config 00:02:33.469 net/octeon_ep: not in enabled drivers build config 00:02:33.469 net/pcap: not in enabled drivers build config 00:02:33.469 net/pfe: not in enabled drivers build config 00:02:33.469 net/qede: not in enabled drivers build config 00:02:33.469 net/ring: not in enabled drivers build config 00:02:33.469 net/sfc: not in enabled drivers build config 00:02:33.469 net/softnic: not in enabled drivers build config 00:02:33.469 net/tap: not in enabled drivers build config 00:02:33.469 net/thunderx: not in enabled drivers build config 00:02:33.469 net/txgbe: not in enabled drivers build config 00:02:33.469 net/vdev_netvsc: not in enabled drivers build config 00:02:33.469 net/vhost: not in enabled drivers build config 00:02:33.469 net/virtio: not in enabled drivers build config 00:02:33.469 net/vmxnet3: not in enabled drivers build config 00:02:33.469 raw/*: missing internal dependency, "rawdev" 00:02:33.469 crypto/armv8: not in enabled drivers build config 00:02:33.469 crypto/bcmfs: not in enabled drivers build config 00:02:33.469 crypto/caam_jr: not in enabled drivers build config 00:02:33.469 crypto/ccp: not in enabled drivers build config 00:02:33.469 crypto/cnxk: not in enabled drivers build config 00:02:33.469 crypto/dpaa_sec: not in enabled drivers build config 00:02:33.469 crypto/dpaa2_sec: not in enabled drivers build config 00:02:33.469 crypto/ipsec_mb: not in enabled drivers build config 00:02:33.469 crypto/mlx5: not in enabled drivers build config 00:02:33.469 crypto/mvsam: not in enabled drivers build config 00:02:33.469 crypto/nitrox: 
not in enabled drivers build config 00:02:33.469 crypto/null: not in enabled drivers build config 00:02:33.469 crypto/octeontx: not in enabled drivers build config 00:02:33.469 crypto/openssl: not in enabled drivers build config 00:02:33.469 crypto/scheduler: not in enabled drivers build config 00:02:33.469 crypto/uadk: not in enabled drivers build config 00:02:33.469 crypto/virtio: not in enabled drivers build config 00:02:33.469 compress/isal: not in enabled drivers build config 00:02:33.469 compress/mlx5: not in enabled drivers build config 00:02:33.469 compress/nitrox: not in enabled drivers build config 00:02:33.469 compress/octeontx: not in enabled drivers build config 00:02:33.469 compress/zlib: not in enabled drivers build config 00:02:33.469 regex/*: missing internal dependency, "regexdev" 00:02:33.469 ml/*: missing internal dependency, "mldev" 00:02:33.469 vdpa/ifc: not in enabled drivers build config 00:02:33.469 vdpa/mlx5: not in enabled drivers build config 00:02:33.469 vdpa/nfp: not in enabled drivers build config 00:02:33.469 vdpa/sfc: not in enabled drivers build config 00:02:33.469 event/*: missing internal dependency, "eventdev" 00:02:33.469 baseband/*: missing internal dependency, "bbdev" 00:02:33.469 gpu/*: missing internal dependency, "gpudev" 00:02:33.469 00:02:33.469 00:02:33.469 Build targets in project: 84 00:02:33.469 00:02:33.469 DPDK 24.03.0 00:02:33.469 00:02:33.469 User defined options 00:02:33.469 buildtype : debug 00:02:33.469 default_library : shared 00:02:33.469 libdir : lib 00:02:33.469 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:33.469 b_sanitize : address 00:02:33.469 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:33.469 c_link_args : 00:02:33.469 cpu_instruction_set: native 00:02:33.469 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:33.469 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:33.469 enable_docs : false 00:02:33.469 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:33.469 enable_kmods : false 00:02:33.469 max_lcores : 128 00:02:33.469 tests : false 00:02:33.469 00:02:33.469 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:33.469 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:33.469 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:33.469 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:33.469 [3/267] Linking static target lib/librte_kvargs.a 00:02:33.469 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:33.469 [5/267] Linking static target lib/librte_log.a 00:02:33.469 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:33.469 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:33.469 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:33.469 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:33.469 [10/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:33.469 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:33.469 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.731 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:33.731 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:33.731 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:33.731 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:33.731 [17/267] Linking static target lib/librte_telemetry.a 00:02:33.731 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:33.993 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:33.993 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:33.993 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:33.993 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.993 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:33.993 [24/267] Linking target lib/librte_log.so.24.1 00:02:33.993 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:33.993 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:34.253 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:34.253 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:34.253 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:34.253 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:34.253 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:34.253 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:34.253 [33/267] Linking target lib/librte_kvargs.so.24.1 00:02:34.253 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.512 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:34.512 [36/267] Linking target lib/librte_telemetry.so.24.1 00:02:34.512 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:34.512 [38/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:34.512 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:34.512 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:34.512 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:34.512 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:34.512 [43/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:34.512 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:34.512 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:34.512 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:34.771 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:34.771 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:34.771 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:34.771 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:34.771 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:35.031 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:35.031 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:35.031 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:35.031 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:35.031 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:35.031 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:35.289 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:35.289 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:35.290 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:35.290 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:35.290 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:35.290 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:35.290 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:35.290 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:35.290 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:35.548 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:35.549 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:35.549 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:35.549 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:35.549 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:35.807 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:35.807 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:35.807 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:35.807 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:35.807 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:35.807 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:35.807 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:35.807 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:35.807 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:36.067 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:36.067 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:36.067 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:36.067 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:36.067 [85/267] Linking static target lib/librte_eal.a 00:02:36.067 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:36.067 [87/267] Linking static target lib/librte_ring.a 00:02:36.325 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:36.325 [89/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:36.325 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:36.325 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:36.325 [92/267] Linking static target lib/librte_mempool.a 00:02:36.325 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:36.325 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:36.584 [95/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.585 [96/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:36.844 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:36.844 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:36.844 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:36.844 [100/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:36.844 [101/267] Linking static target lib/librte_rcu.a 00:02:36.844 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:36.844 [103/267] Linking static target lib/librte_mbuf.a 00:02:36.844 [104/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:36.844 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:37.102 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:37.102 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:37.102 [108/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.102 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:37.102 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:37.102 [111/267] Linking static target lib/librte_meter.a 00:02:37.102 [112/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:37.102 [113/267] Linking static target lib/librte_net.a 00:02:37.102 [114/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.361 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:37.361 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.361 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:37.361 [118/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.620 [119/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.620 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:37.620 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:37.620 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:37.879 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:37.879 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:37.879 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:37.879 [126/267] Linking static target lib/librte_pci.a 00:02:37.879 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:37.879 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:38.139 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:38.139 [130/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:38.139 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:38.139 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:38.139 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:38.139 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:38.139 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:38.139 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:38.139 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:38.139 [138/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.139 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:38.139 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:38.139 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:38.139 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:38.139 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:38.139 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:38.395 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:38.395 [146/267] Linking static target lib/librte_cmdline.a 00:02:38.395 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:38.395 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:38.395 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:38.652 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:38.652 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:38.909 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:38.909 [153/267] Linking static target lib/librte_compressdev.a 00:02:38.909 [154/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:38.909 [155/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:38.909 [156/267] Linking static target lib/librte_timer.a 00:02:38.909 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:38.909 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:38.909 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:39.168 [160/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:39.168 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:39.168 [162/267] Linking static target lib/librte_hash.a 00:02:39.168 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:39.168 [164/267] Linking static target lib/librte_dmadev.a 00:02:39.169 [165/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.429 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:39.429 [167/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:39.429 [168/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:39.429 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:39.429 
[170/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.429 [171/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.740 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:39.740 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:39.740 [174/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:39.740 [175/267] Linking static target lib/librte_ethdev.a 00:02:39.740 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:39.740 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:39.740 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:40.019 [179/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.019 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:40.019 [181/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.019 [182/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:40.019 [183/267] Linking static target lib/librte_cryptodev.a 00:02:40.278 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:40.278 [185/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:40.278 [186/267] Linking static target lib/librte_power.a 00:02:40.278 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:40.278 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:40.278 [189/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:40.278 [190/267] Linking static target lib/librte_reorder.a 00:02:40.278 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:40.278 [192/267] Linking static target lib/librte_security.a 00:02:40.538 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.797 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:40.797 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.797 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:41.055 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:41.055 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:41.055 [199/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.055 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:41.314 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:41.314 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:41.314 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:41.314 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:41.314 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:41.573 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:41.573 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:41.573 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:41.573 [209/267] Linking static 
target drivers/libtmp_rte_bus_vdev.a 00:02:41.573 [210/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:41.573 [211/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.573 [212/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.573 [213/267] Linking static target drivers/librte_bus_pci.a 00:02:41.831 [214/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:41.831 [215/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.831 [216/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.831 [217/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.831 [218/267] Linking static target drivers/librte_bus_vdev.a 00:02:41.831 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:41.831 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:41.831 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:42.090 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.090 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.090 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:42.090 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.090 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.348 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:43.282 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.282 [229/267] Linking target lib/librte_eal.so.24.1 00:02:43.540 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:43.540 [231/267] Linking target lib/librte_meter.so.24.1 00:02:43.540 [232/267] Linking target lib/librte_pci.so.24.1 00:02:43.540 [233/267] Linking target lib/librte_dmadev.so.24.1 00:02:43.540 [234/267] Linking target lib/librte_ring.so.24.1 00:02:43.540 [235/267] Linking target lib/librte_timer.so.24.1 00:02:43.540 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:43.540 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:43.540 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:43.798 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:43.798 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:43.798 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:43.798 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:43.798 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:43.798 [244/267] Linking target lib/librte_rcu.so.24.1 00:02:43.798 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:43.798 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:43.798 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:43.798 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:44.056 [249/267] 
Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:44.056 [250/267] Linking target lib/librte_compressdev.so.24.1 00:02:44.056 [251/267] Linking target lib/librte_reorder.so.24.1 00:02:44.056 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:02:44.056 [253/267] Linking target lib/librte_net.so.24.1 00:02:44.056 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:44.056 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:44.056 [256/267] Linking target lib/librte_security.so.24.1 00:02:44.056 [257/267] Linking target lib/librte_cmdline.so.24.1 00:02:44.056 [258/267] Linking target lib/librte_hash.so.24.1 00:02:44.314 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:44.880 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.880 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:44.880 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:44.880 [263/267] Linking target lib/librte_power.so.24.1 00:02:45.446 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:45.446 [265/267] Linking static target lib/librte_vhost.a 00:02:46.381 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.639 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:46.639 INFO: autodetecting backend as ninja 00:02:46.639 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:01.503 CC lib/ut_mock/mock.o 00:03:01.503 CC lib/ut/ut.o 00:03:01.503 CC lib/log/log_flags.o 00:03:01.503 CC lib/log/log.o 00:03:01.503 CC lib/log/log_deprecated.o 00:03:01.503 LIB libspdk_ut.a 00:03:01.503 LIB libspdk_ut_mock.a 00:03:01.503 SO libspdk_ut.so.2.0 00:03:01.503 LIB libspdk_log.a 00:03:01.503 SO libspdk_ut_mock.so.6.0 00:03:01.503 SYMLINK libspdk_ut.so 00:03:01.503 SO libspdk_log.so.7.1 00:03:01.503 SYMLINK libspdk_ut_mock.so 00:03:01.503 SYMLINK libspdk_log.so 00:03:01.503 CXX lib/trace_parser/trace.o 00:03:01.503 CC lib/ioat/ioat.o 00:03:01.503 CC lib/dma/dma.o 00:03:01.503 CC lib/util/base64.o 00:03:01.503 CC lib/util/crc16.o 00:03:01.503 CC lib/util/cpuset.o 00:03:01.503 CC lib/util/crc32.o 00:03:01.503 CC lib/util/bit_array.o 00:03:01.503 CC lib/util/crc32c.o 00:03:01.503 CC lib/vfio_user/host/vfio_user_pci.o 00:03:01.503 CC lib/util/crc32_ieee.o 00:03:01.503 CC lib/util/crc64.o 00:03:01.503 CC lib/vfio_user/host/vfio_user.o 00:03:01.503 CC lib/util/dif.o 00:03:01.503 LIB libspdk_dma.a 00:03:01.503 CC lib/util/fd.o 00:03:01.503 CC lib/util/fd_group.o 00:03:01.503 SO libspdk_dma.so.5.0 00:03:01.503 CC lib/util/file.o 00:03:01.503 CC lib/util/hexlify.o 00:03:01.503 SYMLINK libspdk_dma.so 00:03:01.503 LIB libspdk_ioat.a 00:03:01.503 CC lib/util/iov.o 00:03:01.503 CC lib/util/math.o 00:03:01.503 SO libspdk_ioat.so.7.0 00:03:01.503 CC lib/util/net.o 00:03:01.503 LIB libspdk_vfio_user.a 00:03:01.503 SO libspdk_vfio_user.so.5.0 00:03:01.503 SYMLINK libspdk_ioat.so 00:03:01.503 CC lib/util/pipe.o 00:03:01.503 CC lib/util/strerror_tls.o 00:03:01.503 CC lib/util/string.o 00:03:01.503 CC lib/util/uuid.o 00:03:01.503 SYMLINK libspdk_vfio_user.so 00:03:01.761 CC lib/util/xor.o 00:03:01.761 CC lib/util/zipf.o 00:03:01.761 CC lib/util/md5.o 00:03:02.020 LIB libspdk_util.a 00:03:02.020 SO libspdk_util.so.10.1 00:03:02.020 LIB 
libspdk_trace_parser.a 00:03:02.278 SO libspdk_trace_parser.so.6.0 00:03:02.278 SYMLINK libspdk_util.so 00:03:02.278 SYMLINK libspdk_trace_parser.so 00:03:02.278 CC lib/idxd/idxd.o 00:03:02.278 CC lib/conf/conf.o 00:03:02.278 CC lib/idxd/idxd_kernel.o 00:03:02.278 CC lib/idxd/idxd_user.o 00:03:02.278 CC lib/vmd/vmd.o 00:03:02.278 CC lib/vmd/led.o 00:03:02.278 CC lib/json/json_parse.o 00:03:02.278 CC lib/json/json_util.o 00:03:02.278 CC lib/env_dpdk/env.o 00:03:02.278 CC lib/rdma_utils/rdma_utils.o 00:03:02.537 CC lib/json/json_write.o 00:03:02.537 CC lib/env_dpdk/memory.o 00:03:02.537 LIB libspdk_conf.a 00:03:02.537 CC lib/env_dpdk/pci.o 00:03:02.537 SO libspdk_conf.so.6.0 00:03:02.537 CC lib/env_dpdk/init.o 00:03:02.537 CC lib/env_dpdk/threads.o 00:03:02.537 SYMLINK libspdk_conf.so 00:03:02.537 LIB libspdk_rdma_utils.a 00:03:02.537 CC lib/env_dpdk/pci_ioat.o 00:03:02.537 SO libspdk_rdma_utils.so.1.0 00:03:02.795 SYMLINK libspdk_rdma_utils.so 00:03:02.795 CC lib/env_dpdk/pci_virtio.o 00:03:02.795 CC lib/env_dpdk/pci_vmd.o 00:03:02.795 LIB libspdk_json.a 00:03:02.795 SO libspdk_json.so.6.0 00:03:02.795 CC lib/env_dpdk/pci_idxd.o 00:03:02.795 LIB libspdk_idxd.a 00:03:02.795 SYMLINK libspdk_json.so 00:03:02.795 CC lib/env_dpdk/pci_event.o 00:03:02.795 CC lib/rdma_provider/common.o 00:03:02.795 SO libspdk_idxd.so.12.1 00:03:03.053 CC lib/env_dpdk/sigbus_handler.o 00:03:03.053 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:03.053 CC lib/env_dpdk/pci_dpdk.o 00:03:03.053 SYMLINK libspdk_idxd.so 00:03:03.053 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:03.053 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:03.053 CC lib/jsonrpc/jsonrpc_server.o 00:03:03.054 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:03.054 LIB libspdk_vmd.a 00:03:03.054 SO libspdk_vmd.so.6.0 00:03:03.054 CC lib/jsonrpc/jsonrpc_client.o 00:03:03.054 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:03.054 SYMLINK libspdk_vmd.so 00:03:03.054 LIB libspdk_rdma_provider.a 00:03:03.054 SO libspdk_rdma_provider.so.7.0 00:03:03.312 SYMLINK libspdk_rdma_provider.so 00:03:03.312 LIB libspdk_jsonrpc.a 00:03:03.312 SO libspdk_jsonrpc.so.6.0 00:03:03.312 SYMLINK libspdk_jsonrpc.so 00:03:03.570 CC lib/rpc/rpc.o 00:03:03.827 LIB libspdk_env_dpdk.a 00:03:03.827 LIB libspdk_rpc.a 00:03:03.827 SO libspdk_env_dpdk.so.15.1 00:03:03.827 SO libspdk_rpc.so.6.0 00:03:03.827 SYMLINK libspdk_rpc.so 00:03:03.827 SYMLINK libspdk_env_dpdk.so 00:03:04.085 CC lib/notify/notify.o 00:03:04.085 CC lib/notify/notify_rpc.o 00:03:04.085 CC lib/keyring/keyring_rpc.o 00:03:04.085 CC lib/keyring/keyring.o 00:03:04.085 CC lib/trace/trace_rpc.o 00:03:04.085 CC lib/trace/trace.o 00:03:04.085 CC lib/trace/trace_flags.o 00:03:04.085 LIB libspdk_notify.a 00:03:04.085 SO libspdk_notify.so.6.0 00:03:04.347 LIB libspdk_keyring.a 00:03:04.347 SYMLINK libspdk_notify.so 00:03:04.347 SO libspdk_keyring.so.2.0 00:03:04.347 LIB libspdk_trace.a 00:03:04.347 SYMLINK libspdk_keyring.so 00:03:04.347 SO libspdk_trace.so.11.0 00:03:04.347 SYMLINK libspdk_trace.so 00:03:04.604 CC lib/thread/thread.o 00:03:04.604 CC lib/thread/iobuf.o 00:03:04.604 CC lib/sock/sock.o 00:03:04.604 CC lib/sock/sock_rpc.o 00:03:04.861 LIB libspdk_sock.a 00:03:05.120 SO libspdk_sock.so.10.0 00:03:05.120 SYMLINK libspdk_sock.so 00:03:05.379 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:05.379 CC lib/nvme/nvme_ns_cmd.o 00:03:05.379 CC lib/nvme/nvme_ctrlr.o 00:03:05.379 CC lib/nvme/nvme_fabric.o 00:03:05.379 CC lib/nvme/nvme_pcie.o 00:03:05.379 CC lib/nvme/nvme.o 00:03:05.379 CC lib/nvme/nvme_ns.o 00:03:05.379 CC lib/nvme/nvme_qpair.o 00:03:05.379 CC 
lib/nvme/nvme_pcie_common.o 00:03:05.640 CC lib/nvme/nvme_quirks.o 00:03:05.901 CC lib/nvme/nvme_transport.o 00:03:05.901 CC lib/nvme/nvme_discovery.o 00:03:05.901 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:06.162 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:06.162 LIB libspdk_thread.a 00:03:06.162 CC lib/nvme/nvme_tcp.o 00:03:06.162 CC lib/nvme/nvme_opal.o 00:03:06.162 CC lib/nvme/nvme_io_msg.o 00:03:06.162 SO libspdk_thread.so.11.0 00:03:06.162 SYMLINK libspdk_thread.so 00:03:06.162 CC lib/nvme/nvme_poll_group.o 00:03:06.423 CC lib/nvme/nvme_zns.o 00:03:06.423 CC lib/nvme/nvme_stubs.o 00:03:06.684 CC lib/accel/accel.o 00:03:06.684 CC lib/init/json_config.o 00:03:06.684 CC lib/nvme/nvme_auth.o 00:03:06.684 CC lib/blob/blobstore.o 00:03:06.684 CC lib/virtio/virtio.o 00:03:06.684 CC lib/blob/request.o 00:03:06.684 CC lib/virtio/virtio_vhost_user.o 00:03:06.946 CC lib/init/subsystem.o 00:03:06.946 CC lib/fsdev/fsdev.o 00:03:06.946 CC lib/blob/zeroes.o 00:03:06.946 CC lib/init/subsystem_rpc.o 00:03:06.946 CC lib/init/rpc.o 00:03:06.946 CC lib/accel/accel_rpc.o 00:03:06.946 CC lib/accel/accel_sw.o 00:03:07.206 CC lib/virtio/virtio_vfio_user.o 00:03:07.206 LIB libspdk_init.a 00:03:07.206 SO libspdk_init.so.6.0 00:03:07.206 CC lib/nvme/nvme_cuse.o 00:03:07.206 SYMLINK libspdk_init.so 00:03:07.206 CC lib/blob/blob_bs_dev.o 00:03:07.206 CC lib/virtio/virtio_pci.o 00:03:07.464 CC lib/fsdev/fsdev_io.o 00:03:07.464 CC lib/nvme/nvme_rdma.o 00:03:07.464 CC lib/event/app.o 00:03:07.464 CC lib/fsdev/fsdev_rpc.o 00:03:07.464 LIB libspdk_virtio.a 00:03:07.464 CC lib/event/reactor.o 00:03:07.464 CC lib/event/log_rpc.o 00:03:07.464 SO libspdk_virtio.so.7.0 00:03:07.464 CC lib/event/app_rpc.o 00:03:07.464 SYMLINK libspdk_virtio.so 00:03:07.464 CC lib/event/scheduler_static.o 00:03:07.722 LIB libspdk_accel.a 00:03:07.722 SO libspdk_accel.so.16.0 00:03:07.722 LIB libspdk_fsdev.a 00:03:07.722 SYMLINK libspdk_accel.so 00:03:07.722 SO libspdk_fsdev.so.2.0 00:03:07.722 SYMLINK libspdk_fsdev.so 00:03:07.981 LIB libspdk_event.a 00:03:07.981 CC lib/bdev/bdev.o 00:03:07.981 CC lib/bdev/part.o 00:03:07.981 CC lib/bdev/bdev_zone.o 00:03:07.981 CC lib/bdev/scsi_nvme.o 00:03:07.981 CC lib/bdev/bdev_rpc.o 00:03:07.981 SO libspdk_event.so.14.0 00:03:07.981 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:07.981 SYMLINK libspdk_event.so 00:03:08.551 LIB libspdk_fuse_dispatcher.a 00:03:08.551 SO libspdk_fuse_dispatcher.so.1.0 00:03:08.551 SYMLINK libspdk_fuse_dispatcher.so 00:03:08.811 LIB libspdk_nvme.a 00:03:09.070 SO libspdk_nvme.so.15.0 00:03:09.326 SYMLINK libspdk_nvme.so 00:03:09.326 LIB libspdk_blob.a 00:03:09.586 SO libspdk_blob.so.12.0 00:03:09.586 SYMLINK libspdk_blob.so 00:03:09.847 CC lib/lvol/lvol.o 00:03:09.847 CC lib/blobfs/blobfs.o 00:03:09.847 CC lib/blobfs/tree.o 00:03:10.786 LIB libspdk_bdev.a 00:03:10.786 LIB libspdk_blobfs.a 00:03:10.786 SO libspdk_bdev.so.17.0 00:03:10.786 SO libspdk_blobfs.so.11.0 00:03:10.786 LIB libspdk_lvol.a 00:03:10.786 SYMLINK libspdk_bdev.so 00:03:10.786 SYMLINK libspdk_blobfs.so 00:03:10.786 SO libspdk_lvol.so.11.0 00:03:10.786 SYMLINK libspdk_lvol.so 00:03:10.786 CC lib/ftl/ftl_core.o 00:03:10.786 CC lib/ftl/ftl_init.o 00:03:10.786 CC lib/ftl/ftl_layout.o 00:03:10.786 CC lib/ftl/ftl_debug.o 00:03:10.786 CC lib/ublk/ublk.o 00:03:10.786 CC lib/ftl/ftl_io.o 00:03:10.786 CC lib/ublk/ublk_rpc.o 00:03:10.786 CC lib/nvmf/ctrlr.o 00:03:10.786 CC lib/nbd/nbd.o 00:03:10.786 CC lib/scsi/dev.o 00:03:11.045 CC lib/ftl/ftl_sb.o 00:03:11.045 CC lib/ftl/ftl_l2p.o 00:03:11.045 CC lib/scsi/lun.o 
00:03:11.045 CC lib/ftl/ftl_l2p_flat.o 00:03:11.045 CC lib/nvmf/ctrlr_discovery.o 00:03:11.045 CC lib/ftl/ftl_nv_cache.o 00:03:11.045 CC lib/nbd/nbd_rpc.o 00:03:11.045 CC lib/nvmf/ctrlr_bdev.o 00:03:11.045 CC lib/ftl/ftl_band.o 00:03:11.306 CC lib/ftl/ftl_band_ops.o 00:03:11.306 CC lib/scsi/port.o 00:03:11.306 CC lib/ftl/ftl_writer.o 00:03:11.306 LIB libspdk_nbd.a 00:03:11.306 SO libspdk_nbd.so.7.0 00:03:11.306 CC lib/scsi/scsi.o 00:03:11.306 SYMLINK libspdk_nbd.so 00:03:11.306 CC lib/scsi/scsi_bdev.o 00:03:11.306 CC lib/scsi/scsi_pr.o 00:03:11.565 CC lib/ftl/ftl_rq.o 00:03:11.565 LIB libspdk_ublk.a 00:03:11.565 CC lib/ftl/ftl_reloc.o 00:03:11.565 SO libspdk_ublk.so.3.0 00:03:11.565 CC lib/ftl/ftl_l2p_cache.o 00:03:11.565 SYMLINK libspdk_ublk.so 00:03:11.565 CC lib/ftl/ftl_p2l.o 00:03:11.565 CC lib/ftl/ftl_p2l_log.o 00:03:11.565 CC lib/scsi/scsi_rpc.o 00:03:11.565 CC lib/scsi/task.o 00:03:11.565 CC lib/nvmf/subsystem.o 00:03:11.825 CC lib/nvmf/nvmf.o 00:03:11.825 CC lib/nvmf/nvmf_rpc.o 00:03:11.825 CC lib/ftl/mngt/ftl_mngt.o 00:03:11.825 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:11.825 CC lib/nvmf/transport.o 00:03:11.825 LIB libspdk_scsi.a 00:03:12.085 SO libspdk_scsi.so.9.0 00:03:12.085 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:12.085 CC lib/nvmf/tcp.o 00:03:12.085 SYMLINK libspdk_scsi.so 00:03:12.085 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:12.085 CC lib/nvmf/stubs.o 00:03:12.085 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:12.345 CC lib/iscsi/conn.o 00:03:12.345 CC lib/vhost/vhost.o 00:03:12.345 CC lib/iscsi/init_grp.o 00:03:12.345 CC lib/iscsi/iscsi.o 00:03:12.345 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:12.606 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:12.606 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:12.606 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:12.606 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:12.606 CC lib/iscsi/param.o 00:03:12.606 CC lib/nvmf/mdns_server.o 00:03:12.606 CC lib/iscsi/portal_grp.o 00:03:12.606 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:12.866 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:12.866 CC lib/vhost/vhost_rpc.o 00:03:12.866 CC lib/vhost/vhost_scsi.o 00:03:12.866 CC lib/vhost/vhost_blk.o 00:03:12.866 CC lib/vhost/rte_vhost_user.o 00:03:12.866 CC lib/iscsi/tgt_node.o 00:03:13.126 CC lib/iscsi/iscsi_subsystem.o 00:03:13.126 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:13.126 CC lib/nvmf/rdma.o 00:03:13.126 CC lib/ftl/utils/ftl_conf.o 00:03:13.126 CC lib/ftl/utils/ftl_md.o 00:03:13.385 CC lib/ftl/utils/ftl_mempool.o 00:03:13.385 CC lib/nvmf/auth.o 00:03:13.385 CC lib/ftl/utils/ftl_bitmap.o 00:03:13.385 CC lib/ftl/utils/ftl_property.o 00:03:13.385 CC lib/iscsi/iscsi_rpc.o 00:03:13.385 CC lib/iscsi/task.o 00:03:13.644 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:13.644 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:13.644 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:13.644 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:13.644 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:13.644 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:13.644 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:13.644 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:13.644 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:13.644 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:13.927 LIB libspdk_vhost.a 00:03:13.927 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:13.927 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:13.927 LIB libspdk_iscsi.a 00:03:13.927 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:13.927 CC lib/ftl/base/ftl_base_dev.o 00:03:13.927 SO libspdk_vhost.so.8.0 00:03:13.927 SO libspdk_iscsi.so.8.0 00:03:13.927 SYMLINK libspdk_vhost.so 00:03:13.927 CC lib/ftl/base/ftl_base_bdev.o 
00:03:13.927 CC lib/ftl/ftl_trace.o 00:03:13.927 SYMLINK libspdk_iscsi.so 00:03:14.188 LIB libspdk_ftl.a 00:03:14.188 SO libspdk_ftl.so.9.0 00:03:14.448 SYMLINK libspdk_ftl.so 00:03:15.019 LIB libspdk_nvmf.a 00:03:15.019 SO libspdk_nvmf.so.20.0 00:03:15.279 SYMLINK libspdk_nvmf.so 00:03:15.540 CC module/env_dpdk/env_dpdk_rpc.o 00:03:15.540 CC module/keyring/file/keyring.o 00:03:15.540 CC module/keyring/linux/keyring.o 00:03:15.540 CC module/accel/error/accel_error.o 00:03:15.540 CC module/accel/ioat/accel_ioat.o 00:03:15.540 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:15.540 CC module/fsdev/aio/fsdev_aio.o 00:03:15.540 CC module/sock/posix/posix.o 00:03:15.540 CC module/accel/dsa/accel_dsa.o 00:03:15.540 CC module/blob/bdev/blob_bdev.o 00:03:15.540 LIB libspdk_env_dpdk_rpc.a 00:03:15.540 SO libspdk_env_dpdk_rpc.so.6.0 00:03:15.540 CC module/keyring/file/keyring_rpc.o 00:03:15.540 SYMLINK libspdk_env_dpdk_rpc.so 00:03:15.540 CC module/accel/dsa/accel_dsa_rpc.o 00:03:15.540 CC module/keyring/linux/keyring_rpc.o 00:03:15.540 LIB libspdk_scheduler_dynamic.a 00:03:15.540 CC module/accel/error/accel_error_rpc.o 00:03:15.540 CC module/accel/ioat/accel_ioat_rpc.o 00:03:15.540 SO libspdk_scheduler_dynamic.so.4.0 00:03:15.802 LIB libspdk_keyring_file.a 00:03:15.802 SYMLINK libspdk_scheduler_dynamic.so 00:03:15.802 SO libspdk_keyring_file.so.2.0 00:03:15.802 LIB libspdk_keyring_linux.a 00:03:15.802 LIB libspdk_accel_error.a 00:03:15.802 LIB libspdk_accel_ioat.a 00:03:15.802 SO libspdk_accel_error.so.2.0 00:03:15.802 SO libspdk_accel_ioat.so.6.0 00:03:15.802 SO libspdk_keyring_linux.so.1.0 00:03:15.802 SYMLINK libspdk_keyring_file.so 00:03:15.802 SYMLINK libspdk_accel_ioat.so 00:03:15.802 LIB libspdk_blob_bdev.a 00:03:15.802 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:15.802 SYMLINK libspdk_keyring_linux.so 00:03:15.802 CC module/fsdev/aio/linux_aio_mgr.o 00:03:15.802 SO libspdk_blob_bdev.so.12.0 00:03:15.802 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:15.802 SYMLINK libspdk_accel_error.so 00:03:15.802 LIB libspdk_accel_dsa.a 00:03:15.802 SYMLINK libspdk_blob_bdev.so 00:03:15.802 CC module/scheduler/gscheduler/gscheduler.o 00:03:15.802 SO libspdk_accel_dsa.so.5.0 00:03:16.062 SYMLINK libspdk_accel_dsa.so 00:03:16.062 CC module/accel/iaa/accel_iaa.o 00:03:16.062 LIB libspdk_scheduler_dpdk_governor.a 00:03:16.062 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:16.062 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:16.062 CC module/accel/iaa/accel_iaa_rpc.o 00:03:16.062 LIB libspdk_scheduler_gscheduler.a 00:03:16.062 SO libspdk_scheduler_gscheduler.so.4.0 00:03:16.062 CC module/bdev/gpt/gpt.o 00:03:16.062 CC module/bdev/error/vbdev_error.o 00:03:16.062 CC module/bdev/delay/vbdev_delay.o 00:03:16.062 CC module/blobfs/bdev/blobfs_bdev.o 00:03:16.062 CC module/bdev/lvol/vbdev_lvol.o 00:03:16.062 CC module/bdev/error/vbdev_error_rpc.o 00:03:16.062 SYMLINK libspdk_scheduler_gscheduler.so 00:03:16.062 LIB libspdk_fsdev_aio.a 00:03:16.062 LIB libspdk_accel_iaa.a 00:03:16.322 SO libspdk_fsdev_aio.so.1.0 00:03:16.322 SO libspdk_accel_iaa.so.3.0 00:03:16.322 CC module/bdev/malloc/bdev_malloc.o 00:03:16.322 CC module/bdev/gpt/vbdev_gpt.o 00:03:16.322 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:16.322 SYMLINK libspdk_fsdev_aio.so 00:03:16.322 SYMLINK libspdk_accel_iaa.so 00:03:16.322 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:16.322 LIB libspdk_sock_posix.a 00:03:16.322 LIB libspdk_bdev_error.a 00:03:16.322 CC module/bdev/null/bdev_null.o 00:03:16.322 SO libspdk_sock_posix.so.6.0 
00:03:16.322 SO libspdk_bdev_error.so.6.0 00:03:16.322 LIB libspdk_blobfs_bdev.a 00:03:16.322 CC module/bdev/nvme/bdev_nvme.o 00:03:16.322 SYMLINK libspdk_bdev_error.so 00:03:16.322 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:16.322 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:16.322 SO libspdk_blobfs_bdev.so.6.0 00:03:16.322 SYMLINK libspdk_sock_posix.so 00:03:16.322 CC module/bdev/nvme/nvme_rpc.o 00:03:16.322 LIB libspdk_bdev_delay.a 00:03:16.322 LIB libspdk_bdev_gpt.a 00:03:16.580 SO libspdk_bdev_gpt.so.6.0 00:03:16.580 SO libspdk_bdev_delay.so.6.0 00:03:16.580 SYMLINK libspdk_blobfs_bdev.so 00:03:16.580 CC module/bdev/null/bdev_null_rpc.o 00:03:16.580 CC module/bdev/nvme/bdev_mdns_client.o 00:03:16.580 SYMLINK libspdk_bdev_delay.so 00:03:16.580 CC module/bdev/nvme/vbdev_opal.o 00:03:16.580 SYMLINK libspdk_bdev_gpt.so 00:03:16.580 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:16.580 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:16.580 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:16.580 LIB libspdk_bdev_null.a 00:03:16.580 SO libspdk_bdev_null.so.6.0 00:03:16.580 LIB libspdk_bdev_malloc.a 00:03:16.580 SYMLINK libspdk_bdev_null.so 00:03:16.580 SO libspdk_bdev_malloc.so.6.0 00:03:16.580 LIB libspdk_bdev_lvol.a 00:03:16.840 SO libspdk_bdev_lvol.so.6.0 00:03:16.840 SYMLINK libspdk_bdev_malloc.so 00:03:16.840 SYMLINK libspdk_bdev_lvol.so 00:03:16.840 CC module/bdev/raid/bdev_raid.o 00:03:16.840 CC module/bdev/passthru/vbdev_passthru.o 00:03:16.840 CC module/bdev/split/vbdev_split.o 00:03:16.840 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:16.840 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:16.840 CC module/bdev/xnvme/bdev_xnvme.o 00:03:16.840 CC module/bdev/aio/bdev_aio.o 00:03:16.840 CC module/bdev/ftl/bdev_ftl.o 00:03:16.840 CC module/bdev/aio/bdev_aio_rpc.o 00:03:17.098 CC module/bdev/split/vbdev_split_rpc.o 00:03:17.098 LIB libspdk_bdev_passthru.a 00:03:17.098 SO libspdk_bdev_passthru.so.6.0 00:03:17.098 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:17.098 SYMLINK libspdk_bdev_passthru.so 00:03:17.098 CC module/bdev/raid/bdev_raid_rpc.o 00:03:17.098 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:17.098 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:17.098 LIB libspdk_bdev_split.a 00:03:17.098 SO libspdk_bdev_split.so.6.0 00:03:17.098 LIB libspdk_bdev_zone_block.a 00:03:17.098 LIB libspdk_bdev_aio.a 00:03:17.098 LIB libspdk_bdev_xnvme.a 00:03:17.098 SYMLINK libspdk_bdev_split.so 00:03:17.098 CC module/bdev/raid/bdev_raid_sb.o 00:03:17.099 CC module/bdev/raid/raid0.o 00:03:17.357 SO libspdk_bdev_zone_block.so.6.0 00:03:17.357 SO libspdk_bdev_xnvme.so.3.0 00:03:17.357 SO libspdk_bdev_aio.so.6.0 00:03:17.357 LIB libspdk_bdev_ftl.a 00:03:17.357 SYMLINK libspdk_bdev_xnvme.so 00:03:17.357 SYMLINK libspdk_bdev_zone_block.so 00:03:17.357 CC module/bdev/iscsi/bdev_iscsi.o 00:03:17.357 SO libspdk_bdev_ftl.so.6.0 00:03:17.357 CC module/bdev/raid/raid1.o 00:03:17.357 SYMLINK libspdk_bdev_aio.so 00:03:17.357 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:17.357 CC module/bdev/raid/concat.o 00:03:17.357 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:17.357 SYMLINK libspdk_bdev_ftl.so 00:03:17.357 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:17.357 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:17.615 LIB libspdk_bdev_iscsi.a 00:03:17.615 SO libspdk_bdev_iscsi.so.6.0 00:03:17.615 SYMLINK libspdk_bdev_iscsi.so 00:03:17.615 LIB libspdk_bdev_virtio.a 00:03:17.875 SO libspdk_bdev_virtio.so.6.0 00:03:17.875 LIB libspdk_bdev_raid.a 00:03:17.875 SO libspdk_bdev_raid.so.6.0 00:03:17.875 SYMLINK 
libspdk_bdev_virtio.so 00:03:17.875 SYMLINK libspdk_bdev_raid.so 00:03:18.447 LIB libspdk_bdev_nvme.a 00:03:18.447 SO libspdk_bdev_nvme.so.7.1 00:03:18.704 SYMLINK libspdk_bdev_nvme.so 00:03:18.962 CC module/event/subsystems/iobuf/iobuf.o 00:03:18.962 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:18.962 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:18.962 CC module/event/subsystems/vmd/vmd.o 00:03:18.962 CC module/event/subsystems/sock/sock.o 00:03:18.962 CC module/event/subsystems/scheduler/scheduler.o 00:03:18.962 CC module/event/subsystems/fsdev/fsdev.o 00:03:18.962 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:18.962 CC module/event/subsystems/keyring/keyring.o 00:03:18.962 LIB libspdk_event_scheduler.a 00:03:19.220 LIB libspdk_event_vhost_blk.a 00:03:19.220 LIB libspdk_event_fsdev.a 00:03:19.220 SO libspdk_event_scheduler.so.4.0 00:03:19.220 LIB libspdk_event_keyring.a 00:03:19.220 LIB libspdk_event_sock.a 00:03:19.220 SO libspdk_event_vhost_blk.so.3.0 00:03:19.220 LIB libspdk_event_vmd.a 00:03:19.220 SO libspdk_event_fsdev.so.1.0 00:03:19.220 LIB libspdk_event_iobuf.a 00:03:19.220 SO libspdk_event_keyring.so.1.0 00:03:19.220 SO libspdk_event_sock.so.5.0 00:03:19.220 SO libspdk_event_vmd.so.6.0 00:03:19.220 SYMLINK libspdk_event_vhost_blk.so 00:03:19.220 SYMLINK libspdk_event_scheduler.so 00:03:19.220 SO libspdk_event_iobuf.so.3.0 00:03:19.220 SYMLINK libspdk_event_fsdev.so 00:03:19.220 SYMLINK libspdk_event_keyring.so 00:03:19.220 SYMLINK libspdk_event_sock.so 00:03:19.220 SYMLINK libspdk_event_vmd.so 00:03:19.220 SYMLINK libspdk_event_iobuf.so 00:03:19.478 CC module/event/subsystems/accel/accel.o 00:03:19.478 LIB libspdk_event_accel.a 00:03:19.478 SO libspdk_event_accel.so.6.0 00:03:19.736 SYMLINK libspdk_event_accel.so 00:03:19.736 CC module/event/subsystems/bdev/bdev.o 00:03:19.994 LIB libspdk_event_bdev.a 00:03:19.994 SO libspdk_event_bdev.so.6.0 00:03:19.994 SYMLINK libspdk_event_bdev.so 00:03:20.252 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:20.252 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:20.252 CC module/event/subsystems/scsi/scsi.o 00:03:20.252 CC module/event/subsystems/ublk/ublk.o 00:03:20.252 CC module/event/subsystems/nbd/nbd.o 00:03:20.252 LIB libspdk_event_scsi.a 00:03:20.252 LIB libspdk_event_ublk.a 00:03:20.252 LIB libspdk_event_nbd.a 00:03:20.252 SO libspdk_event_scsi.so.6.0 00:03:20.252 SO libspdk_event_ublk.so.3.0 00:03:20.252 SO libspdk_event_nbd.so.6.0 00:03:20.510 SYMLINK libspdk_event_scsi.so 00:03:20.510 SYMLINK libspdk_event_ublk.so 00:03:20.510 SYMLINK libspdk_event_nbd.so 00:03:20.510 LIB libspdk_event_nvmf.a 00:03:20.510 SO libspdk_event_nvmf.so.6.0 00:03:20.510 SYMLINK libspdk_event_nvmf.so 00:03:20.510 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.510 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:20.767 LIB libspdk_event_iscsi.a 00:03:20.767 LIB libspdk_event_vhost_scsi.a 00:03:20.767 SO libspdk_event_iscsi.so.6.0 00:03:20.767 SO libspdk_event_vhost_scsi.so.3.0 00:03:20.767 SYMLINK libspdk_event_iscsi.so 00:03:20.767 SYMLINK libspdk_event_vhost_scsi.so 00:03:20.767 SO libspdk.so.6.0 00:03:21.024 SYMLINK libspdk.so 00:03:21.024 TEST_HEADER include/spdk/accel.h 00:03:21.024 TEST_HEADER include/spdk/accel_module.h 00:03:21.024 CC app/trace_record/trace_record.o 00:03:21.024 TEST_HEADER include/spdk/assert.h 00:03:21.024 CXX app/trace/trace.o 00:03:21.024 TEST_HEADER include/spdk/barrier.h 00:03:21.024 TEST_HEADER include/spdk/base64.h 00:03:21.024 TEST_HEADER include/spdk/bdev.h 00:03:21.024 TEST_HEADER 
include/spdk/bdev_module.h 00:03:21.024 TEST_HEADER include/spdk/bdev_zone.h 00:03:21.024 TEST_HEADER include/spdk/bit_array.h 00:03:21.024 TEST_HEADER include/spdk/bit_pool.h 00:03:21.024 TEST_HEADER include/spdk/blob_bdev.h 00:03:21.024 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:21.024 TEST_HEADER include/spdk/blobfs.h 00:03:21.024 TEST_HEADER include/spdk/blob.h 00:03:21.024 TEST_HEADER include/spdk/conf.h 00:03:21.024 TEST_HEADER include/spdk/config.h 00:03:21.024 TEST_HEADER include/spdk/cpuset.h 00:03:21.024 TEST_HEADER include/spdk/crc16.h 00:03:21.024 TEST_HEADER include/spdk/crc32.h 00:03:21.024 TEST_HEADER include/spdk/crc64.h 00:03:21.024 TEST_HEADER include/spdk/dif.h 00:03:21.024 TEST_HEADER include/spdk/dma.h 00:03:21.024 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:21.024 TEST_HEADER include/spdk/endian.h 00:03:21.024 TEST_HEADER include/spdk/env_dpdk.h 00:03:21.024 TEST_HEADER include/spdk/env.h 00:03:21.024 TEST_HEADER include/spdk/event.h 00:03:21.024 TEST_HEADER include/spdk/fd_group.h 00:03:21.024 TEST_HEADER include/spdk/fd.h 00:03:21.024 TEST_HEADER include/spdk/file.h 00:03:21.024 TEST_HEADER include/spdk/fsdev.h 00:03:21.024 TEST_HEADER include/spdk/fsdev_module.h 00:03:21.024 TEST_HEADER include/spdk/ftl.h 00:03:21.024 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:21.024 TEST_HEADER include/spdk/gpt_spec.h 00:03:21.024 CC examples/util/zipf/zipf.o 00:03:21.024 CC examples/ioat/perf/perf.o 00:03:21.024 TEST_HEADER include/spdk/hexlify.h 00:03:21.024 TEST_HEADER include/spdk/histogram_data.h 00:03:21.024 CC test/thread/poller_perf/poller_perf.o 00:03:21.024 TEST_HEADER include/spdk/idxd.h 00:03:21.024 TEST_HEADER include/spdk/idxd_spec.h 00:03:21.024 TEST_HEADER include/spdk/init.h 00:03:21.024 TEST_HEADER include/spdk/ioat.h 00:03:21.024 TEST_HEADER include/spdk/ioat_spec.h 00:03:21.024 TEST_HEADER include/spdk/iscsi_spec.h 00:03:21.024 TEST_HEADER include/spdk/json.h 00:03:21.024 TEST_HEADER include/spdk/jsonrpc.h 00:03:21.024 TEST_HEADER include/spdk/keyring.h 00:03:21.024 TEST_HEADER include/spdk/keyring_module.h 00:03:21.283 TEST_HEADER include/spdk/likely.h 00:03:21.283 CC test/dma/test_dma/test_dma.o 00:03:21.283 TEST_HEADER include/spdk/log.h 00:03:21.283 TEST_HEADER include/spdk/lvol.h 00:03:21.283 TEST_HEADER include/spdk/md5.h 00:03:21.283 TEST_HEADER include/spdk/memory.h 00:03:21.283 TEST_HEADER include/spdk/mmio.h 00:03:21.283 TEST_HEADER include/spdk/nbd.h 00:03:21.283 TEST_HEADER include/spdk/net.h 00:03:21.283 TEST_HEADER include/spdk/notify.h 00:03:21.283 TEST_HEADER include/spdk/nvme.h 00:03:21.283 TEST_HEADER include/spdk/nvme_intel.h 00:03:21.283 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:21.283 CC test/app/bdev_svc/bdev_svc.o 00:03:21.283 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:21.283 TEST_HEADER include/spdk/nvme_spec.h 00:03:21.283 TEST_HEADER include/spdk/nvme_zns.h 00:03:21.283 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:21.283 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:21.283 TEST_HEADER include/spdk/nvmf.h 00:03:21.283 TEST_HEADER include/spdk/nvmf_spec.h 00:03:21.283 TEST_HEADER include/spdk/nvmf_transport.h 00:03:21.283 TEST_HEADER include/spdk/opal.h 00:03:21.283 TEST_HEADER include/spdk/opal_spec.h 00:03:21.283 TEST_HEADER include/spdk/pci_ids.h 00:03:21.283 TEST_HEADER include/spdk/pipe.h 00:03:21.283 TEST_HEADER include/spdk/queue.h 00:03:21.283 TEST_HEADER include/spdk/reduce.h 00:03:21.283 TEST_HEADER include/spdk/rpc.h 00:03:21.283 TEST_HEADER include/spdk/scheduler.h 00:03:21.283 TEST_HEADER include/spdk/scsi.h 
00:03:21.283 TEST_HEADER include/spdk/scsi_spec.h 00:03:21.283 TEST_HEADER include/spdk/sock.h 00:03:21.283 TEST_HEADER include/spdk/stdinc.h 00:03:21.283 TEST_HEADER include/spdk/string.h 00:03:21.283 TEST_HEADER include/spdk/thread.h 00:03:21.283 TEST_HEADER include/spdk/trace.h 00:03:21.283 TEST_HEADER include/spdk/trace_parser.h 00:03:21.283 CC test/env/mem_callbacks/mem_callbacks.o 00:03:21.283 TEST_HEADER include/spdk/tree.h 00:03:21.283 TEST_HEADER include/spdk/ublk.h 00:03:21.283 TEST_HEADER include/spdk/util.h 00:03:21.283 TEST_HEADER include/spdk/uuid.h 00:03:21.283 TEST_HEADER include/spdk/version.h 00:03:21.283 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:21.283 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:21.283 LINK interrupt_tgt 00:03:21.283 TEST_HEADER include/spdk/vhost.h 00:03:21.283 TEST_HEADER include/spdk/vmd.h 00:03:21.283 TEST_HEADER include/spdk/xor.h 00:03:21.283 TEST_HEADER include/spdk/zipf.h 00:03:21.283 CXX test/cpp_headers/accel.o 00:03:21.283 LINK zipf 00:03:21.283 LINK poller_perf 00:03:21.283 LINK spdk_trace_record 00:03:21.283 LINK ioat_perf 00:03:21.283 LINK bdev_svc 00:03:21.283 CXX test/cpp_headers/accel_module.o 00:03:21.283 CXX test/cpp_headers/assert.o 00:03:21.541 LINK spdk_trace 00:03:21.541 CXX test/cpp_headers/barrier.o 00:03:21.541 CC test/app/histogram_perf/histogram_perf.o 00:03:21.541 CC test/app/jsoncat/jsoncat.o 00:03:21.541 CC examples/ioat/verify/verify.o 00:03:21.541 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:21.541 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:21.541 LINK mem_callbacks 00:03:21.541 CXX test/cpp_headers/base64.o 00:03:21.541 LINK jsoncat 00:03:21.541 LINK histogram_perf 00:03:21.541 LINK test_dma 00:03:21.541 CC app/nvmf_tgt/nvmf_main.o 00:03:21.799 LINK verify 00:03:21.799 CXX test/cpp_headers/bdev.o 00:03:21.799 CC examples/thread/thread/thread_ex.o 00:03:21.799 CXX test/cpp_headers/bdev_module.o 00:03:21.799 CC test/env/vtophys/vtophys.o 00:03:21.799 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:21.799 LINK nvmf_tgt 00:03:21.799 CXX test/cpp_headers/bdev_zone.o 00:03:21.799 LINK nvme_fuzz 00:03:21.799 LINK vtophys 00:03:21.799 CC examples/sock/hello_world/hello_sock.o 00:03:22.057 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:22.057 LINK thread 00:03:22.057 CXX test/cpp_headers/bit_array.o 00:03:22.057 CC examples/idxd/perf/perf.o 00:03:22.057 CC examples/vmd/lsvmd/lsvmd.o 00:03:22.057 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:22.057 CC app/iscsi_tgt/iscsi_tgt.o 00:03:22.057 LINK hello_sock 00:03:22.057 CXX test/cpp_headers/bit_pool.o 00:03:22.057 CC app/spdk_tgt/spdk_tgt.o 00:03:22.057 LINK lsvmd 00:03:22.316 CC examples/vmd/led/led.o 00:03:22.316 LINK env_dpdk_post_init 00:03:22.316 LINK vhost_fuzz 00:03:22.316 CXX test/cpp_headers/blob_bdev.o 00:03:22.316 LINK iscsi_tgt 00:03:22.316 CC test/rpc_client/rpc_client_test.o 00:03:22.316 LINK led 00:03:22.316 CXX test/cpp_headers/blobfs_bdev.o 00:03:22.316 LINK spdk_tgt 00:03:22.316 CXX test/cpp_headers/blobfs.o 00:03:22.316 CC test/env/memory/memory_ut.o 00:03:22.316 LINK idxd_perf 00:03:22.316 CC test/app/stub/stub.o 00:03:22.574 LINK rpc_client_test 00:03:22.574 CXX test/cpp_headers/blob.o 00:03:22.574 CXX test/cpp_headers/conf.o 00:03:22.574 CC app/spdk_lspci/spdk_lspci.o 00:03:22.574 LINK stub 00:03:22.574 LINK spdk_lspci 00:03:22.574 CC test/accel/dif/dif.o 00:03:22.574 CC examples/accel/perf/accel_perf.o 00:03:22.574 CXX test/cpp_headers/config.o 00:03:22.574 CC test/blobfs/mkfs/mkfs.o 00:03:22.574 CXX test/cpp_headers/cpuset.o 
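The long TEST_HEADER / CXX test/cpp_headers run above is a header self-containment pass: every public spdk/*.h is compiled in its own tiny translation unit, so a header that forgets one of its own includes fails on the spot. A minimal sketch of the idea, assuming the harness is a simple loop over the headers (the real generator may differ):

```bash
# Compile each public header alone as C++; a header that is not
# self-contained breaks its own one-line translation unit.
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include <spdk/%s.h>\n' "$name" > "test/cpp_headers/$name.cpp"
    c++ -I include -c "test/cpp_headers/$name.cpp" -o "test/cpp_headers/$name.o"
done
```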
00:03:22.832 CC test/event/event_perf/event_perf.o 00:03:22.832 CC test/event/reactor/reactor.o 00:03:22.832 CC examples/blob/hello_world/hello_blob.o 00:03:22.832 CC app/spdk_nvme_perf/perf.o 00:03:22.832 CXX test/cpp_headers/crc16.o 00:03:22.832 LINK mkfs 00:03:22.832 LINK event_perf 00:03:22.832 LINK reactor 00:03:22.832 CXX test/cpp_headers/crc32.o 00:03:23.132 LINK hello_blob 00:03:23.132 CXX test/cpp_headers/crc64.o 00:03:23.132 CC test/event/reactor_perf/reactor_perf.o 00:03:23.132 CC test/event/app_repeat/app_repeat.o 00:03:23.132 LINK iscsi_fuzz 00:03:23.132 LINK accel_perf 00:03:23.132 CC examples/nvme/hello_world/hello_world.o 00:03:23.132 LINK reactor_perf 00:03:23.132 CXX test/cpp_headers/dif.o 00:03:23.132 LINK app_repeat 00:03:23.389 CC examples/blob/cli/blobcli.o 00:03:23.389 CC test/event/scheduler/scheduler.o 00:03:23.389 LINK dif 00:03:23.389 LINK hello_world 00:03:23.389 CC app/spdk_nvme_identify/identify.o 00:03:23.389 CXX test/cpp_headers/dma.o 00:03:23.389 CC app/spdk_nvme_discover/discovery_aer.o 00:03:23.389 LINK memory_ut 00:03:23.389 CXX test/cpp_headers/endian.o 00:03:23.646 LINK scheduler 00:03:23.646 CC examples/nvme/reconnect/reconnect.o 00:03:23.646 CXX test/cpp_headers/env_dpdk.o 00:03:23.646 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:23.646 CC test/lvol/esnap/esnap.o 00:03:23.646 LINK spdk_nvme_discover 00:03:23.646 CC test/env/pci/pci_ut.o 00:03:23.646 LINK spdk_nvme_perf 00:03:23.646 CXX test/cpp_headers/env.o 00:03:23.646 LINK blobcli 00:03:23.646 CC test/nvme/aer/aer.o 00:03:23.905 CC test/nvme/reset/reset.o 00:03:23.905 CC test/nvme/sgl/sgl.o 00:03:23.905 CXX test/cpp_headers/event.o 00:03:23.905 LINK reconnect 00:03:23.905 CC examples/nvme/arbitration/arbitration.o 00:03:23.905 CXX test/cpp_headers/fd_group.o 00:03:23.905 CXX test/cpp_headers/fd.o 00:03:23.905 LINK reset 00:03:24.163 LINK pci_ut 00:03:24.163 LINK aer 00:03:24.163 LINK nvme_manage 00:03:24.163 LINK sgl 00:03:24.163 CXX test/cpp_headers/file.o 00:03:24.163 CXX test/cpp_headers/fsdev.o 00:03:24.163 LINK spdk_nvme_identify 00:03:24.163 LINK arbitration 00:03:24.163 CXX test/cpp_headers/fsdev_module.o 00:03:24.163 CC test/bdev/bdevio/bdevio.o 00:03:24.163 CC examples/nvme/hotplug/hotplug.o 00:03:24.163 CC test/nvme/e2edp/nvme_dp.o 00:03:24.421 CC test/nvme/overhead/overhead.o 00:03:24.421 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:24.421 CXX test/cpp_headers/ftl.o 00:03:24.421 CC examples/nvme/abort/abort.o 00:03:24.421 CC app/spdk_top/spdk_top.o 00:03:24.421 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:24.421 LINK cmb_copy 00:03:24.421 LINK hotplug 00:03:24.421 LINK nvme_dp 00:03:24.421 CXX test/cpp_headers/fuse_dispatcher.o 00:03:24.421 CXX test/cpp_headers/gpt_spec.o 00:03:24.679 LINK bdevio 00:03:24.679 LINK overhead 00:03:24.679 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:24.679 CXX test/cpp_headers/hexlify.o 00:03:24.679 LINK hello_fsdev 00:03:24.679 LINK abort 00:03:24.679 CC app/vhost/vhost.o 00:03:24.679 CC app/spdk_dd/spdk_dd.o 00:03:24.679 LINK pmr_persistence 00:03:24.679 CC test/nvme/err_injection/err_injection.o 00:03:24.679 CXX test/cpp_headers/histogram_data.o 00:03:24.938 CC test/nvme/startup/startup.o 00:03:24.938 LINK vhost 00:03:24.938 CC test/nvme/reserve/reserve.o 00:03:24.938 CC test/nvme/simple_copy/simple_copy.o 00:03:24.938 CXX test/cpp_headers/idxd.o 00:03:24.938 LINK err_injection 00:03:24.938 LINK startup 00:03:24.938 LINK reserve 00:03:24.938 CC examples/bdev/hello_world/hello_bdev.o 00:03:24.938 CXX test/cpp_headers/idxd_spec.o 
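The CC / LINK pairs through this stretch are the functional-test and example binaries being assembled against the libraries built earlier. A rough, hypothetical illustration of one such pair; the flags and library list here are invented for the sketch, not taken from the build:

```bash
# "CC": compile one example source; "LINK": link it against the SPDK libs.
cc -I include -c examples/nvme/hello_world/hello_world.c -o hello_world.o
cc -o hello_world hello_world.o -lspdk_nvme -lspdk_env_dpdk -lspdk_util
```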
00:03:25.197 LINK spdk_dd 00:03:25.197 LINK simple_copy 00:03:25.197 CC test/nvme/connect_stress/connect_stress.o 00:03:25.197 CC test/nvme/boot_partition/boot_partition.o 00:03:25.197 CC examples/bdev/bdevperf/bdevperf.o 00:03:25.197 CXX test/cpp_headers/init.o 00:03:25.197 CC test/nvme/compliance/nvme_compliance.o 00:03:25.197 CXX test/cpp_headers/ioat.o 00:03:25.197 LINK spdk_top 00:03:25.197 LINK hello_bdev 00:03:25.197 LINK boot_partition 00:03:25.197 CC test/nvme/fused_ordering/fused_ordering.o 00:03:25.456 LINK connect_stress 00:03:25.456 CXX test/cpp_headers/ioat_spec.o 00:03:25.456 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:25.456 LINK nvme_compliance 00:03:25.456 LINK fused_ordering 00:03:25.456 CC test/nvme/fdp/fdp.o 00:03:25.456 CC test/nvme/cuse/cuse.o 00:03:25.456 CC app/fio/nvme/fio_plugin.o 00:03:25.456 CXX test/cpp_headers/iscsi_spec.o 00:03:25.715 CC app/fio/bdev/fio_plugin.o 00:03:25.715 LINK doorbell_aers 00:03:25.715 CXX test/cpp_headers/json.o 00:03:25.715 CXX test/cpp_headers/jsonrpc.o 00:03:25.715 CXX test/cpp_headers/keyring.o 00:03:25.715 CXX test/cpp_headers/keyring_module.o 00:03:25.715 CXX test/cpp_headers/likely.o 00:03:25.715 CXX test/cpp_headers/log.o 00:03:25.715 CXX test/cpp_headers/lvol.o 00:03:25.715 LINK fdp 00:03:25.974 CXX test/cpp_headers/md5.o 00:03:25.974 CXX test/cpp_headers/memory.o 00:03:25.974 CXX test/cpp_headers/mmio.o 00:03:25.974 CXX test/cpp_headers/nbd.o 00:03:25.974 CXX test/cpp_headers/notify.o 00:03:25.974 CXX test/cpp_headers/net.o 00:03:25.974 LINK spdk_bdev 00:03:25.974 LINK bdevperf 00:03:25.974 CXX test/cpp_headers/nvme.o 00:03:25.974 LINK spdk_nvme 00:03:25.974 CXX test/cpp_headers/nvme_intel.o 00:03:25.974 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.974 CXX test/cpp_headers/nvme_ocssd.o 00:03:25.974 CXX test/cpp_headers/nvme_spec.o 00:03:26.232 CXX test/cpp_headers/nvme_zns.o 00:03:26.232 CXX test/cpp_headers/nvmf_cmd.o 00:03:26.232 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:26.232 CXX test/cpp_headers/nvmf.o 00:03:26.232 CXX test/cpp_headers/nvmf_spec.o 00:03:26.232 CXX test/cpp_headers/nvmf_transport.o 00:03:26.232 CXX test/cpp_headers/opal.o 00:03:26.232 CXX test/cpp_headers/opal_spec.o 00:03:26.232 CXX test/cpp_headers/pci_ids.o 00:03:26.232 CC examples/nvmf/nvmf/nvmf.o 00:03:26.232 CXX test/cpp_headers/pipe.o 00:03:26.232 CXX test/cpp_headers/queue.o 00:03:26.232 CXX test/cpp_headers/reduce.o 00:03:26.232 CXX test/cpp_headers/rpc.o 00:03:26.491 CXX test/cpp_headers/scheduler.o 00:03:26.491 CXX test/cpp_headers/scsi.o 00:03:26.491 CXX test/cpp_headers/scsi_spec.o 00:03:26.491 CXX test/cpp_headers/sock.o 00:03:26.491 CXX test/cpp_headers/stdinc.o 00:03:26.491 CXX test/cpp_headers/string.o 00:03:26.491 CXX test/cpp_headers/thread.o 00:03:26.491 CXX test/cpp_headers/trace.o 00:03:26.491 CXX test/cpp_headers/trace_parser.o 00:03:26.491 CXX test/cpp_headers/tree.o 00:03:26.491 CXX test/cpp_headers/ublk.o 00:03:26.491 CXX test/cpp_headers/util.o 00:03:26.491 CXX test/cpp_headers/uuid.o 00:03:26.491 CXX test/cpp_headers/version.o 00:03:26.491 CXX test/cpp_headers/vfio_user_pci.o 00:03:26.491 LINK nvmf 00:03:26.491 CXX test/cpp_headers/vfio_user_spec.o 00:03:26.749 CXX test/cpp_headers/vhost.o 00:03:26.749 CXX test/cpp_headers/vmd.o 00:03:26.749 CXX test/cpp_headers/xor.o 00:03:26.749 CXX test/cpp_headers/zipf.o 00:03:26.749 LINK cuse 00:03:28.669 LINK esnap 00:03:28.927 00:03:28.927 real 1m5.419s 00:03:28.927 user 5m59.268s 00:03:28.927 sys 1m1.954s 00:03:28.927 20:30:45 make -- common/autotest_common.sh@1130 -- $ 
xtrace_disable 00:03:28.927 20:30:45 make -- common/autotest_common.sh@10 -- $ set +x 00:03:28.927 ************************************ 00:03:28.927 END TEST make 00:03:28.927 ************************************ 00:03:28.927 20:30:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:28.927 20:30:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:28.927 20:30:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:28.927 20:30:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.927 20:30:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:28.927 20:30:45 -- pm/common@44 -- $ pid=5068 00:03:28.927 20:30:45 -- pm/common@50 -- $ kill -TERM 5068 00:03:28.927 20:30:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.927 20:30:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:28.927 20:30:45 -- pm/common@44 -- $ pid=5069 00:03:28.927 20:30:45 -- pm/common@50 -- $ kill -TERM 5069 00:03:28.927 20:30:45 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:28.927 20:30:45 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:28.927 20:30:45 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:28.927 20:30:45 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:28.927 20:30:45 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:28.927 20:30:45 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:28.927 20:30:45 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:28.927 20:30:45 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:28.927 20:30:45 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:28.927 20:30:45 -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.927 20:30:45 -- scripts/common.sh@336 -- # read -ra ver1 00:03:28.927 20:30:45 -- scripts/common.sh@337 -- # IFS=.-: 00:03:28.927 20:30:45 -- scripts/common.sh@337 -- # read -ra ver2 00:03:28.927 20:30:45 -- scripts/common.sh@338 -- # local 'op=<' 00:03:28.927 20:30:45 -- scripts/common.sh@340 -- # ver1_l=2 00:03:28.927 20:30:45 -- scripts/common.sh@341 -- # ver2_l=1 00:03:28.927 20:30:45 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:28.927 20:30:45 -- scripts/common.sh@344 -- # case "$op" in 00:03:28.927 20:30:45 -- scripts/common.sh@345 -- # : 1 00:03:28.927 20:30:45 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:28.927 20:30:45 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:28.927 20:30:45 -- scripts/common.sh@365 -- # decimal 1 00:03:28.927 20:30:45 -- scripts/common.sh@353 -- # local d=1 00:03:28.927 20:30:45 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:28.927 20:30:45 -- scripts/common.sh@355 -- # echo 1 00:03:28.927 20:30:45 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:28.927 20:30:45 -- scripts/common.sh@366 -- # decimal 2 00:03:28.927 20:30:45 -- scripts/common.sh@353 -- # local d=2 00:03:28.927 20:30:45 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:28.927 20:30:45 -- scripts/common.sh@355 -- # echo 2 00:03:28.927 20:30:45 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:28.927 20:30:45 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:28.927 20:30:45 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:28.927 20:30:45 -- scripts/common.sh@368 -- # return 0 00:03:28.927 20:30:45 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:28.927 20:30:45 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:28.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.927 --rc genhtml_branch_coverage=1 00:03:28.927 --rc genhtml_function_coverage=1 00:03:28.927 --rc genhtml_legend=1 00:03:28.927 --rc geninfo_all_blocks=1 00:03:28.927 --rc geninfo_unexecuted_blocks=1 00:03:28.927 00:03:28.927 ' 00:03:28.927 20:30:45 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:28.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.927 --rc genhtml_branch_coverage=1 00:03:28.927 --rc genhtml_function_coverage=1 00:03:28.927 --rc genhtml_legend=1 00:03:28.927 --rc geninfo_all_blocks=1 00:03:28.927 --rc geninfo_unexecuted_blocks=1 00:03:28.927 00:03:28.927 ' 00:03:28.927 20:30:45 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:28.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.927 --rc genhtml_branch_coverage=1 00:03:28.927 --rc genhtml_function_coverage=1 00:03:28.927 --rc genhtml_legend=1 00:03:28.927 --rc geninfo_all_blocks=1 00:03:28.927 --rc geninfo_unexecuted_blocks=1 00:03:28.927 00:03:28.927 ' 00:03:28.927 20:30:45 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:28.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.927 --rc genhtml_branch_coverage=1 00:03:28.927 --rc genhtml_function_coverage=1 00:03:28.927 --rc genhtml_legend=1 00:03:28.927 --rc geninfo_all_blocks=1 00:03:28.927 --rc geninfo_unexecuted_blocks=1 00:03:28.927 00:03:28.927 ' 00:03:28.927 20:30:45 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:28.927 20:30:45 -- nvmf/common.sh@7 -- # uname -s 00:03:28.927 20:30:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:28.927 20:30:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:28.927 20:30:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:28.927 20:30:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:28.927 20:30:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:28.927 20:30:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:28.927 20:30:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:28.927 20:30:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:28.927 20:30:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:28.927 20:30:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:28.927 20:30:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f72aea25-a2ce-4611-a8a8-77f4a743cbb5 00:03:28.928 
20:30:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=f72aea25-a2ce-4611-a8a8-77f4a743cbb5 00:03:28.928 20:30:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:28.928 20:30:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:28.928 20:30:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:28.928 20:30:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:28.928 20:30:46 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:28.928 20:30:46 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:28.928 20:30:46 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:28.928 20:30:46 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:28.928 20:30:46 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:28.928 20:30:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.928 20:30:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.928 20:30:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.928 20:30:46 -- paths/export.sh@5 -- # export PATH 00:03:28.928 20:30:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.928 20:30:46 -- nvmf/common.sh@51 -- # : 0 00:03:28.928 20:30:46 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:28.928 20:30:46 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:28.928 20:30:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:28.928 20:30:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:28.928 20:30:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:28.928 20:30:46 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:28.928 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:28.928 20:30:46 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:28.928 20:30:46 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:28.928 20:30:46 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:28.928 20:30:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:28.928 20:30:46 -- spdk/autotest.sh@32 -- # uname -s 00:03:28.928 20:30:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:28.928 20:30:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:28.928 20:30:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.928 20:30:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:28.928 20:30:46 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.928 20:30:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:29.185 20:30:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:29.185 20:30:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:29.185 20:30:46 -- spdk/autotest.sh@48 -- # udevadm_pid=54225 00:03:29.185 20:30:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:29.185 20:30:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:29.185 20:30:46 -- pm/common@17 -- # local monitor 00:03:29.185 20:30:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.185 20:30:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.185 20:30:46 -- pm/common@25 -- # sleep 1 00:03:29.185 20:30:46 -- pm/common@21 -- # date +%s 00:03:29.185 20:30:46 -- pm/common@21 -- # date +%s 00:03:29.185 20:30:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733517046 00:03:29.185 20:30:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733517046 00:03:29.185 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733517046_collect-vmstat.pm.log 00:03:29.185 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733517046_collect-cpu-load.pm.log 00:03:30.118 20:30:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:30.118 20:30:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:30.118 20:30:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:30.118 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:03:30.118 20:30:47 -- spdk/autotest.sh@59 -- # create_test_list 00:03:30.118 20:30:47 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:30.118 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:03:30.118 20:30:47 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:30.118 20:30:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:30.118 20:30:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:30.118 20:30:47 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:30.118 20:30:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:30.118 20:30:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:30.118 20:30:47 -- common/autotest_common.sh@1457 -- # uname 00:03:30.118 20:30:47 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:30.118 20:30:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:30.118 20:30:47 -- common/autotest_common.sh@1477 -- # uname 00:03:30.118 20:30:47 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:30.118 20:30:47 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:30.118 20:30:47 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:30.118 lcov: LCOV version 1.15 00:03:30.118 20:30:47 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:44.994 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:44.994 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:03.125 20:31:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:03.125 20:31:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.125 20:31:17 -- common/autotest_common.sh@10 -- # set +x 00:04:03.125 20:31:17 -- spdk/autotest.sh@78 -- # rm -f 00:04:03.125 20:31:17 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.125 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:03.125 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:03.125 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:03.125 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:03.125 20:31:18 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:03.125 20:31:18 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:03.125 20:31:18 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:03.125 20:31:18 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:03.126 20:31:18 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:03.126 20:31:18 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:03.126 20:31:18 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:03.126 20:31:18 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:03.126 20:31:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.126 20:31:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:03.126 20:31:18 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:03.126 20:31:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.126 20:31:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.126 20:31:18 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:03.126 20:31:18 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:03.126 20:31:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.126 20:31:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:03.126 20:31:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:03.126 20:31:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:03.126 20:31:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.126 20:31:18 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:03.126 20:31:18 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:04:03.126 20:31:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.126 20:31:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:04:03.126 20:31:18 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:03.126 20:31:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:03.126 20:31:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.126 20:31:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.126 20:31:18 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:04:03.126 20:31:18 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:03.126 20:31:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:03.126 20:31:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.126 20:31:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.126 20:31:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:04:03.126 20:31:18 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:03.126 20:31:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:03.126 20:31:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.126 20:31:18 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:03.126 20:31:18 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:04:03.126 20:31:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.126 20:31:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:04:03.126 20:31:18 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:03.126 20:31:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:03.126 20:31:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.126 20:31:18 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:03.126 20:31:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.126 20:31:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.126 20:31:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:03.126 20:31:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:03.126 20:31:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:03.126 No valid GPT data, bailing 00:04:03.126 20:31:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:03.126 20:31:18 -- scripts/common.sh@394 -- # pt= 00:04:03.126 20:31:18 -- scripts/common.sh@395 -- # return 1 00:04:03.126 20:31:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:03.126 1+0 records in 00:04:03.126 1+0 records out 00:04:03.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030589 s, 34.3 MB/s 00:04:03.126 20:31:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.126 20:31:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.126 20:31:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:03.126 20:31:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:03.126 20:31:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:03.126 No valid GPT data, bailing 00:04:03.126 20:31:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:03.126 20:31:18 -- scripts/common.sh@394 -- # pt= 00:04:03.126 20:31:18 -- scripts/common.sh@395 -- # return 1 00:04:03.126 20:31:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:03.126 1+0 records in 00:04:03.126 1+0 records out 00:04:03.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00661209 s, 159 MB/s 00:04:03.126 20:31:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.126 20:31:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.126 20:31:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:03.126 20:31:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:03.126 20:31:18 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:03.126 No valid GPT data, bailing 00:04:03.126 20:31:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:03.126 20:31:18 -- scripts/common.sh@394 -- # pt= 00:04:03.126 20:31:18 -- scripts/common.sh@395 -- # return 1 00:04:03.126 20:31:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:03.126 1+0 records in 00:04:03.126 1+0 records out 00:04:03.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488343 s, 215 MB/s 00:04:03.126 20:31:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.126 20:31:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.126 20:31:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:03.126 20:31:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:03.126 20:31:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:03.126 No valid GPT data, bailing 00:04:03.126 20:31:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:03.126 20:31:18 -- scripts/common.sh@394 -- # pt= 00:04:03.126 20:31:18 -- scripts/common.sh@395 -- # return 1 00:04:03.126 20:31:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:03.126 1+0 records in 00:04:03.126 1+0 records out 00:04:03.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00722611 s, 145 MB/s 00:04:03.126 20:31:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.126 20:31:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.126 20:31:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:03.126 20:31:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:03.126 20:31:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:03.126 No valid GPT data, bailing 00:04:03.126 20:31:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:03.126 20:31:18 -- scripts/common.sh@394 -- # pt= 00:04:03.126 20:31:18 -- scripts/common.sh@395 -- # return 1 00:04:03.126 20:31:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:03.126 1+0 records in 00:04:03.126 1+0 records out 00:04:03.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516777 s, 203 MB/s 00:04:03.126 20:31:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.126 20:31:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.126 20:31:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:03.126 20:31:18 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:03.126 20:31:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:03.126 No valid GPT data, bailing 00:04:03.126 20:31:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:03.126 20:31:18 -- scripts/common.sh@394 -- # pt= 00:04:03.126 20:31:18 -- scripts/common.sh@395 -- # return 1 00:04:03.126 20:31:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:03.126 1+0 records in 00:04:03.126 1+0 records out 00:04:03.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00603683 s, 174 MB/s 00:04:03.126 20:31:18 -- spdk/autotest.sh@105 -- # sync 00:04:03.126 20:31:18 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:03.126 20:31:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:03.126 20:31:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:03.700 
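The spdk-gpt.py / blkid / dd sequence repeated above for each namespace is a pre-clean pass: any whole namespace with no recognizable partition table is treated as unused and has its first MiB zeroed before the tests start. A condensed sketch of the loop the trace walks through (the device glob is the one from the trace; the in-use check is simplified to the blkid probe):

```bash
shopt -s extglob
for dev in /dev/nvme*n!(*p*); do                 # whole namespaces, no partitions
    pt=$(blkid -s PTTYPE -o value "$dev")        # empty when no partition table
    if [[ -z "$pt" ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1  # scrub the first MiB
    fi
done
```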
20:31:20 -- spdk/autotest.sh@111 -- # uname -s 00:04:03.700 20:31:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:03.700 20:31:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:03.700 20:31:20 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:03.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.535 Hugepages 00:04:04.535 node hugesize free / total 00:04:04.535 node0 1048576kB 0 / 0 00:04:04.535 node0 2048kB 0 / 0 00:04:04.535 00:04:04.535 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:04.535 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:04.797 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:04.797 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:04.797 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:04.797 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:04.798 20:31:21 -- spdk/autotest.sh@117 -- # uname -s 00:04:04.798 20:31:21 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:04.798 20:31:21 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:04.798 20:31:21 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.943 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.943 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.943 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.943 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.943 20:31:22 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:06.886 20:31:23 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:06.886 20:31:23 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:06.886 20:31:23 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:06.886 20:31:23 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:06.886 20:31:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.886 20:31:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.886 20:31:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.886 20:31:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.886 20:31:23 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:06.886 20:31:23 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:06.886 20:31:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:06.886 20:31:23 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.460 Waiting for block devices as requested 00:04:07.460 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:07.460 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:07.721 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:07.721 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:13.010 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:13.010 20:31:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:13.010 20:31:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
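get_nvme_bdfs, traced a few entries above, builds the device list by asking gen_nvme.sh for a JSON config and extracting the PCI addresses with jq, which is why it ends by printing the four 0000:00:1x.0 addresses. The same mechanism in isolation:

```bash
# Enumerate NVMe PCI addresses the way the trace above does.
# Assumes $rootdir points at the SPDK checkout.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || exit 1     # the trace's "(( 4 == 0 ))" guard, inverted
printf '%s\n' "${bdfs[@]}"          # 0000:00:10.0 ... 0000:00:13.0
```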
00:04:13.010 20:31:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:13.010 20:31:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:13.010 20:31:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:13.010 20:31:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:13.010 20:31:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:13.010 20:31:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:13.010 20:31:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:13.010 20:31:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:13.010 20:31:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:13.010 20:31:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:13.010 20:31:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:13.010 20:31:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:13.010 20:31:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:13.010 20:31:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:13.010 20:31:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:13.010 20:31:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:13.010 20:31:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:13.010 20:31:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:13.010 20:31:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:13.010 20:31:29 -- common/autotest_common.sh@1543 -- # continue 00:04:13.010 20:31:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:13.010 20:31:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:13.010 20:31:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:13.010 20:31:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:13.010 20:31:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:13.010 20:31:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:13.010 20:31:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:13.010 20:31:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:13.010 20:31:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:13.010 20:31:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:13.010 20:31:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:13.010 20:31:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:13.010 20:31:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:13.010 20:31:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:13.010 20:31:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:13.010 20:31:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:13.010 20:31:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:13.010 20:31:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:13.010 20:31:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:13.010 20:31:29 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:04:13.010 20:31:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:13.010 20:31:29 -- common/autotest_common.sh@1543 -- # continue 00:04:13.010 20:31:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:13.010 20:31:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:13.010 20:31:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:13.010 20:31:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:13.010 20:31:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:13.010 20:31:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:13.010 20:31:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:13.010 20:31:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:13.010 20:31:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:13.010 20:31:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:13.010 20:31:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:13.010 20:31:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:13.010 20:31:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:13.010 20:31:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:13.010 20:31:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:13.010 20:31:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:13.010 20:31:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:13.011 20:31:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:13.011 20:31:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:13.011 20:31:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:13.011 20:31:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:13.011 20:31:29 -- common/autotest_common.sh@1543 -- # continue 00:04:13.011 20:31:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:13.011 20:31:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:13.011 20:31:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:13.011 20:31:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:13.011 20:31:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:13.011 20:31:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:13.011 20:31:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:13.011 20:31:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:13.011 20:31:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:13.011 20:31:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:13.011 20:31:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:13.011 20:31:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:13.011 20:31:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:13.011 20:31:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:13.011 20:31:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:13.011 20:31:29 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:13.011 20:31:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:13.011 20:31:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:13.011 20:31:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:13.011 20:31:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:13.011 20:31:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:13.011 20:31:29 -- common/autotest_common.sh@1543 -- # continue 00:04:13.011 20:31:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:13.011 20:31:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:13.011 20:31:29 -- common/autotest_common.sh@10 -- # set +x 00:04:13.011 20:31:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:13.011 20:31:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.011 20:31:29 -- common/autotest_common.sh@10 -- # set +x 00:04:13.011 20:31:29 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.838 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.838 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.838 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.838 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.838 20:31:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:13.838 20:31:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:13.838 20:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:14.098 20:31:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:14.098 20:31:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:14.098 20:31:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:14.098 20:31:30 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:14.098 20:31:30 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:14.098 20:31:30 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:14.098 20:31:30 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:14.098 20:31:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:14.098 20:31:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:14.098 20:31:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:14.098 20:31:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.098 20:31:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:14.098 20:31:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:14.098 20:31:31 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:14.098 20:31:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:14.098 20:31:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:14.098 20:31:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:14.098 20:31:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:14.098 20:31:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:14.098 20:31:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:14.098 20:31:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:14.098 20:31:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:14.098 
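The readlink / grep / nvme id-ctrl block above maps each PCI address back to its /dev/nvmeX controller node, then reads the OACS word and the unallocated capacity to decide whether a namespace revert is needed. Condensed from the trace into one function (0x8 is the namespace-management bit of OACS; the field parsing is kept as in the trace):

```bash
check_ctrlr() {
    local bdf=$1 path ctrlr oacs unvmcap
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$path")                            # e.g. /dev/nvme1
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # " 0x12a" in this run
    (( (oacs & 0x8) == 0 )) && return 0      # controller cannot manage namespaces
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    [[ $unvmcap -eq 0 ]] && return 0         # nothing unallocated, nothing to revert
    # otherwise the harness would rebuild the namespaces here
}
check_ctrlr 0000:00:10.0
```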
20:31:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:14.098 20:31:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:14.098 20:31:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:14.098 20:31:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:14.098 20:31:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:14.098 20:31:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:14.098 20:31:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:14.098 20:31:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:14.098 20:31:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:14.098 20:31:31 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:14.098 20:31:31 -- common/autotest_common.sh@1572 -- # return 0 00:04:14.098 20:31:31 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:14.098 20:31:31 -- common/autotest_common.sh@1580 -- # return 0 00:04:14.098 20:31:31 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:14.098 20:31:31 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:14.098 20:31:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:14.098 20:31:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:14.098 20:31:31 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:14.098 20:31:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.098 20:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:14.098 20:31:31 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:14.098 20:31:31 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:14.098 20:31:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.098 20:31:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.098 20:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:14.098 ************************************ 00:04:14.098 START TEST env 00:04:14.098 ************************************ 00:04:14.099 20:31:31 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:14.099 * Looking for test storage... 
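The START TEST env banner above is printed by run_test, the wrapper every suite in this log goes through (the earlier END TEST make banner came from the same place). A simplified sketch of what the wrapper evidently does; the real helper also records timing and xtrace state, so this is an approximation rather than its source:

```bash
run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    "$@"; local rc=$?           # run the suite, keeping its exit status
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}
```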
00:04:14.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:14.099 20:31:31 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:14.099 20:31:31 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:14.099 20:31:31 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:14.099 20:31:31 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:14.099 20:31:31 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.099 20:31:31 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.099 20:31:31 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.099 20:31:31 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.099 20:31:31 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.099 20:31:31 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.099 20:31:31 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.099 20:31:31 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.099 20:31:31 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.099 20:31:31 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.099 20:31:31 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.099 20:31:31 env -- scripts/common.sh@344 -- # case "$op" in 00:04:14.099 20:31:31 env -- scripts/common.sh@345 -- # : 1 00:04:14.099 20:31:31 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.099 20:31:31 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.099 20:31:31 env -- scripts/common.sh@365 -- # decimal 1 00:04:14.099 20:31:31 env -- scripts/common.sh@353 -- # local d=1 00:04:14.099 20:31:31 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.099 20:31:31 env -- scripts/common.sh@355 -- # echo 1 00:04:14.099 20:31:31 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.099 20:31:31 env -- scripts/common.sh@366 -- # decimal 2 00:04:14.099 20:31:31 env -- scripts/common.sh@353 -- # local d=2 00:04:14.099 20:31:31 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.099 20:31:31 env -- scripts/common.sh@355 -- # echo 2 00:04:14.099 20:31:31 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.099 20:31:31 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.099 20:31:31 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.099 20:31:31 env -- scripts/common.sh@368 -- # return 0 00:04:14.099 20:31:31 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.099 20:31:31 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:14.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.099 --rc genhtml_branch_coverage=1 00:04:14.099 --rc genhtml_function_coverage=1 00:04:14.099 --rc genhtml_legend=1 00:04:14.099 --rc geninfo_all_blocks=1 00:04:14.099 --rc geninfo_unexecuted_blocks=1 00:04:14.099 00:04:14.099 ' 00:04:14.099 20:31:31 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:14.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.099 --rc genhtml_branch_coverage=1 00:04:14.099 --rc genhtml_function_coverage=1 00:04:14.099 --rc genhtml_legend=1 00:04:14.099 --rc geninfo_all_blocks=1 00:04:14.099 --rc geninfo_unexecuted_blocks=1 00:04:14.099 00:04:14.099 ' 00:04:14.099 20:31:31 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:14.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.099 --rc genhtml_branch_coverage=1 00:04:14.099 --rc genhtml_function_coverage=1 00:04:14.099 --rc 
genhtml_legend=1 00:04:14.099 --rc geninfo_all_blocks=1 00:04:14.099 --rc geninfo_unexecuted_blocks=1 00:04:14.099 00:04:14.099 ' 00:04:14.099 20:31:31 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:14.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.099 --rc genhtml_branch_coverage=1 00:04:14.099 --rc genhtml_function_coverage=1 00:04:14.099 --rc genhtml_legend=1 00:04:14.099 --rc geninfo_all_blocks=1 00:04:14.099 --rc geninfo_unexecuted_blocks=1 00:04:14.099 00:04:14.099 ' 00:04:14.099 20:31:31 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:14.099 20:31:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.099 20:31:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.099 20:31:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.099 ************************************ 00:04:14.099 START TEST env_memory 00:04:14.099 ************************************ 00:04:14.099 20:31:31 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:14.360 00:04:14.361 00:04:14.361 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.361 http://cunit.sourceforge.net/ 00:04:14.361 00:04:14.361 00:04:14.361 Suite: memory 00:04:14.361 Test: alloc and free memory map ...[2024-12-06 20:31:31.262424] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:14.361 passed 00:04:14.361 Test: mem map translation ...[2024-12-06 20:31:31.301592] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:14.361 [2024-12-06 20:31:31.301768] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:14.361 [2024-12-06 20:31:31.302132] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:14.361 [2024-12-06 20:31:31.302253] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:14.361 passed 00:04:14.361 Test: mem map registration ...[2024-12-06 20:31:31.370687] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:14.361 [2024-12-06 20:31:31.370919] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:14.361 passed 00:04:14.361 Test: mem map adjacent registrations ...passed 00:04:14.361 00:04:14.361 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.361 suites 1 1 n/a 0 0 00:04:14.361 tests 4 4 4 0 0 00:04:14.361 asserts 152 152 152 0 n/a 00:04:14.361 00:04:14.361 Elapsed time = 0.242 seconds 00:04:14.361 00:04:14.361 real 0m0.275s 00:04:14.361 user 0m0.251s 00:04:14.361 sys 0m0.013s 00:04:14.361 20:31:31 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.361 20:31:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:14.361 ************************************ 00:04:14.361 END TEST env_memory 00:04:14.361 ************************************ 00:04:14.622 20:31:31 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:14.622 20:31:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.622 20:31:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.622 20:31:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.622 ************************************ 00:04:14.622 START TEST env_vtophys 00:04:14.622 ************************************ 00:04:14.622 20:31:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:14.622 EAL: lib.eal log level changed from notice to debug 00:04:14.622 EAL: Detected lcore 0 as core 0 on socket 0 00:04:14.622 EAL: Detected lcore 1 as core 0 on socket 0 00:04:14.622 EAL: Detected lcore 2 as core 0 on socket 0 00:04:14.622 EAL: Detected lcore 3 as core 0 on socket 0 00:04:14.622 EAL: Detected lcore 4 as core 0 on socket 0 00:04:14.622 EAL: Detected lcore 5 as core 0 on socket 0 00:04:14.622 EAL: Detected lcore 6 as core 0 on socket 0 00:04:14.622 EAL: Detected lcore 7 as core 0 on socket 0 00:04:14.622 EAL: Detected lcore 8 as core 0 on socket 0 00:04:14.622 EAL: Detected lcore 9 as core 0 on socket 0 00:04:14.622 EAL: Maximum logical cores by configuration: 128 00:04:14.622 EAL: Detected CPU lcores: 10 00:04:14.622 EAL: Detected NUMA nodes: 1 00:04:14.622 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:14.622 EAL: Detected shared linkage of DPDK 00:04:14.622 EAL: No shared files mode enabled, IPC will be disabled 00:04:14.622 EAL: Selected IOVA mode 'PA' 00:04:14.622 EAL: Probing VFIO support... 00:04:14.622 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:14.622 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:14.622 EAL: Ask a virtual area of 0x2e000 bytes 00:04:14.622 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:14.622 EAL: Setting up physically contiguous memory... 
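The "Module /sys/module/vfio not found" probes above are why EAL skips VFIO support and selects IOVA mode 'PA'; a quick host-side check (modprobe availability and an IOMMU-capable kernel are assumptions about the VM, not facts from this log):
    ls /sys/module/vfio /sys/module/vfio_pci 2>/dev/null || echo 'vfio not loaded'
    sudo modprobe vfio-pci    # with IOMMU enabled, EAL could then bind vfio-pci instead of uio_pci_generic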
00:04:14.622 EAL: Setting maximum number of open files to 524288 00:04:14.622 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:14.622 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:14.622 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.622 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:14.622 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.622 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.622 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:14.622 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:14.622 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.622 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:14.622 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.622 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.622 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:14.622 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:14.622 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.622 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:14.622 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.622 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.622 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:14.622 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:14.622 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.622 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:14.622 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.622 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.622 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:14.622 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:14.622 EAL: Hugepages will be freed exactly as allocated. 00:04:14.622 EAL: No shared files mode enabled, IPC is disabled 00:04:14.622 EAL: No shared files mode enabled, IPC is disabled 00:04:14.622 EAL: TSC frequency is ~2600000 KHz 00:04:14.622 EAL: Main lcore 0 is ready (tid=7fc8834fca40;cpuset=[0]) 00:04:14.622 EAL: Trying to obtain current memory policy. 00:04:14.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.622 EAL: Restoring previous memory policy: 0 00:04:14.622 EAL: request: mp_malloc_sync 00:04:14.622 EAL: No shared files mode enabled, IPC is disabled 00:04:14.622 EAL: Heap on socket 0 was expanded by 2MB 00:04:14.622 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:14.622 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:14.622 EAL: Mem event callback 'spdk:(nil)' registered 00:04:14.622 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:14.622 00:04:14.622 00:04:14.622 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.623 http://cunit.sourceforge.net/ 00:04:14.623 00:04:14.623 00:04:14.623 Suite: components_suite 00:04:14.882 Test: vtophys_malloc_test ...passed 00:04:14.882 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:14.882 EAL: Setting policy MPOL_PREFERRED for socket 0 
00:04:14.882 EAL: Restoring previous memory policy: 4 
00:04:14.883 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:14.883 EAL: request: mp_malloc_sync 
00:04:14.883 EAL: No shared files mode enabled, IPC is disabled 
00:04:14.883 EAL: Heap on socket 0 was expanded by 4MB 
00:04:14.883 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:14.883 EAL: request: mp_malloc_sync 
00:04:14.883 EAL: No shared files mode enabled, IPC is disabled 
00:04:14.883 EAL: Heap on socket 0 was shrunk by 4MB 
00:04:14.883 EAL: Trying to obtain current memory policy. 
00:04:14.883 EAL: Setting policy MPOL_PREFERRED for socket 0 
00:04:14.883 EAL: Restoring previous memory policy: 4 
00:04:14.883 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:14.883 EAL: request: mp_malloc_sync 
00:04:14.883 EAL: No shared files mode enabled, IPC is disabled 
00:04:14.883 EAL: Heap on socket 0 was expanded by 6MB 
00:04:15.144 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.144 EAL: request: mp_malloc_sync 
00:04:15.144 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.144 EAL: Heap on socket 0 was shrunk by 6MB 
00:04:15.144 EAL: Trying to obtain current memory policy. 
00:04:15.144 EAL: Setting policy MPOL_PREFERRED for socket 0 
00:04:15.144 EAL: Restoring previous memory policy: 4 
00:04:15.144 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.144 EAL: request: mp_malloc_sync 
00:04:15.144 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.144 EAL: Heap on socket 0 was expanded by 10MB 
00:04:15.144 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.144 EAL: request: mp_malloc_sync 
00:04:15.144 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.144 EAL: Heap on socket 0 was shrunk by 10MB 
00:04:15.144 EAL: Trying to obtain current memory policy. 
00:04:15.144 EAL: Setting policy MPOL_PREFERRED for socket 0 
00:04:15.144 EAL: Restoring previous memory policy: 4 
00:04:15.144 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.144 EAL: request: mp_malloc_sync 
00:04:15.144 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.144 EAL: Heap on socket 0 was expanded by 18MB 
00:04:15.144 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.144 EAL: request: mp_malloc_sync 
00:04:15.144 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.144 EAL: Heap on socket 0 was shrunk by 18MB 
00:04:15.144 EAL: Trying to obtain current memory policy. 
00:04:15.144 EAL: Setting policy MPOL_PREFERRED for socket 0 
00:04:15.144 EAL: Restoring previous memory policy: 4 
00:04:15.144 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.144 EAL: request: mp_malloc_sync 
00:04:15.144 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.144 EAL: Heap on socket 0 was expanded by 34MB 
00:04:15.144 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.144 EAL: request: mp_malloc_sync 
00:04:15.144 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.144 EAL: Heap on socket 0 was shrunk by 34MB 
00:04:15.144 EAL: Trying to obtain current memory policy. 
00:04:15.144 EAL: Setting policy MPOL_PREFERRED for socket 0 
00:04:15.144 EAL: Restoring previous memory policy: 4 
00:04:15.144 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.144 EAL: request: mp_malloc_sync 
00:04:15.144 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.145 EAL: Heap on socket 0 was expanded by 66MB 
00:04:15.145 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.145 EAL: request: mp_malloc_sync 
00:04:15.145 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.145 EAL: Heap on socket 0 was shrunk by 66MB 
00:04:15.465 EAL: Trying to obtain current memory policy. 
00:04:15.465 EAL: Setting policy MPOL_PREFERRED for socket 0 
00:04:15.465 EAL: Restoring previous memory policy: 4 
00:04:15.465 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.465 EAL: request: mp_malloc_sync 
00:04:15.465 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.465 EAL: Heap on socket 0 was expanded by 130MB 
00:04:15.465 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.465 EAL: request: mp_malloc_sync 
00:04:15.465 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.465 EAL: Heap on socket 0 was shrunk by 130MB 
00:04:15.465 EAL: Trying to obtain current memory policy. 
00:04:15.465 EAL: Setting policy MPOL_PREFERRED for socket 0 
00:04:15.725 EAL: Restoring previous memory policy: 4 
00:04:15.726 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.726 EAL: request: mp_malloc_sync 
00:04:15.726 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.726 EAL: Heap on socket 0 was expanded by 258MB 
00:04:15.986 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:15.986 EAL: request: mp_malloc_sync 
00:04:15.986 EAL: No shared files mode enabled, IPC is disabled 
00:04:15.986 EAL: Heap on socket 0 was shrunk by 258MB 
00:04:16.247 EAL: Trying to obtain current memory policy. 
00:04:16.247 EAL: Setting policy MPOL_PREFERRED for socket 0 
00:04:16.247 EAL: Restoring previous memory policy: 4 
00:04:16.247 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:16.247 EAL: request: mp_malloc_sync 
00:04:16.247 EAL: No shared files mode enabled, IPC is disabled 
00:04:16.247 EAL: Heap on socket 0 was expanded by 514MB 
00:04:16.821 EAL: Calling mem event callback 'spdk:(nil)' 
00:04:16.821 EAL: request: mp_malloc_sync 
00:04:16.821 EAL: No shared files mode enabled, IPC is disabled 
00:04:16.821 EAL: Heap on socket 0 was shrunk by 514MB 
00:04:17.394 EAL: Trying to obtain current memory policy. 
00:04:17.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.394 EAL: Restoring previous memory policy: 4 00:04:17.394 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.394 EAL: request: mp_malloc_sync 00:04:17.394 EAL: No shared files mode enabled, IPC is disabled 00:04:17.394 EAL: Heap on socket 0 was expanded by 1026MB 00:04:18.338 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.599 EAL: request: mp_malloc_sync 00:04:18.599 EAL: No shared files mode enabled, IPC is disabled 00:04:18.599 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:19.172 passed 00:04:19.172 00:04:19.172 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.172 suites 1 1 n/a 0 0 00:04:19.172 tests 2 2 2 0 0 00:04:19.172 asserts 5796 5796 5796 0 n/a 00:04:19.172 00:04:19.172 Elapsed time = 4.509 seconds 00:04:19.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.172 EAL: request: mp_malloc_sync 00:04:19.172 EAL: No shared files mode enabled, IPC is disabled 00:04:19.172 EAL: Heap on socket 0 was shrunk by 2MB 00:04:19.172 EAL: No shared files mode enabled, IPC is disabled 00:04:19.172 EAL: No shared files mode enabled, IPC is disabled 00:04:19.172 EAL: No shared files mode enabled, IPC is disabled 00:04:19.172 00:04:19.172 real 0m4.775s 00:04:19.172 user 0m3.967s 00:04:19.172 sys 0m0.660s 00:04:19.172 ************************************ 00:04:19.172 END TEST env_vtophys 00:04:19.172 ************************************ 00:04:19.172 20:31:36 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.172 20:31:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:19.434 20:31:36 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:19.434 20:31:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.434 20:31:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.434 20:31:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.434 ************************************ 00:04:19.434 START TEST env_pci 00:04:19.434 ************************************ 00:04:19.434 20:31:36 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:19.434 00:04:19.434 00:04:19.434 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.434 http://cunit.sourceforge.net/ 00:04:19.434 00:04:19.434 00:04:19.434 Suite: pci 00:04:19.434 Test: pci_hook ...[2024-12-06 20:31:36.364677] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56989 has claimed it 00:04:19.434 passed 00:04:19.434 00:04:19.434 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.434 suites 1 1 n/a 0 0 00:04:19.434 tests 1 1 1 0 0 00:04:19.434 asserts 25 25 25 0 n/a 00:04:19.434 00:04:19.434 Elapsed time = 0.007 seconds 00:04:19.434 EAL: Cannot find device (10000:00:01.0) 00:04:19.434 EAL: Failed to attach device on primary process 00:04:19.434 ************************************ 00:04:19.434 END TEST env_pci 00:04:19.434 ************************************ 00:04:19.434 00:04:19.434 real 0m0.066s 00:04:19.434 user 0m0.028s 00:04:19.434 sys 0m0.037s 00:04:19.434 20:31:36 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.434 20:31:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:19.434 20:31:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:19.434 20:31:36 env -- env/env.sh@15 -- # uname 00:04:19.434 20:31:36 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:19.434 20:31:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:19.434 20:31:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.434 20:31:36 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:19.434 20:31:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.434 20:31:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.434 ************************************ 00:04:19.434 START TEST env_dpdk_post_init 00:04:19.434 ************************************ 00:04:19.434 20:31:36 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.434 EAL: Detected CPU lcores: 10 00:04:19.434 EAL: Detected NUMA nodes: 1 00:04:19.434 EAL: Detected shared linkage of DPDK 00:04:19.434 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.434 EAL: Selected IOVA mode 'PA' 00:04:19.695 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.695 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:19.695 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:19.695 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:19.695 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:19.695 Starting DPDK initialization... 00:04:19.695 Starting SPDK post initialization... 00:04:19.695 SPDK NVMe probe 00:04:19.695 Attaching to 0000:00:10.0 00:04:19.695 Attaching to 0000:00:11.0 00:04:19.695 Attaching to 0000:00:12.0 00:04:19.695 Attaching to 0000:00:13.0 00:04:19.695 Attached to 0000:00:10.0 00:04:19.695 Attached to 0000:00:11.0 00:04:19.695 Attached to 0000:00:13.0 00:04:19.695 Attached to 0000:00:12.0 00:04:19.695 Cleaning up... 
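The post-init probe above can be reproduced standalone with the same arguments the wrapper passed in (core mask and base virtual address are both visible in the invocation above); a sketch, assuming the binary was built in-tree as in this run:
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000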
00:04:19.695 00:04:19.695 real 0m0.245s 00:04:19.695 user 0m0.073s 00:04:19.695 sys 0m0.073s 00:04:19.695 ************************************ 00:04:19.695 END TEST env_dpdk_post_init 00:04:19.695 ************************************ 00:04:19.695 20:31:36 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.695 20:31:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:19.695 20:31:36 env -- env/env.sh@26 -- # uname 00:04:19.695 20:31:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:19.695 20:31:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:19.695 20:31:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.695 20:31:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.695 20:31:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.695 ************************************ 00:04:19.695 START TEST env_mem_callbacks 00:04:19.695 ************************************ 00:04:19.695 20:31:36 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:19.695 EAL: Detected CPU lcores: 10 00:04:19.695 EAL: Detected NUMA nodes: 1 00:04:19.695 EAL: Detected shared linkage of DPDK 00:04:19.695 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.695 EAL: Selected IOVA mode 'PA' 00:04:19.957 00:04:19.957 00:04:19.957 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.957 http://cunit.sourceforge.net/ 00:04:19.957 00:04:19.957 00:04:19.957 Suite: memory 00:04:19.957 Test: test ... 00:04:19.957 register 0x200000200000 2097152 00:04:19.957 malloc 3145728 00:04:19.957 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.957 register 0x200000400000 4194304 00:04:19.957 buf 0x2000004fffc0 len 3145728 PASSED 00:04:19.957 malloc 64 00:04:19.957 buf 0x2000004ffec0 len 64 PASSED 00:04:19.957 malloc 4194304 00:04:19.957 register 0x200000800000 6291456 00:04:19.958 buf 0x2000009fffc0 len 4194304 PASSED 00:04:19.958 free 0x2000004fffc0 3145728 00:04:19.958 free 0x2000004ffec0 64 00:04:19.958 unregister 0x200000400000 4194304 PASSED 00:04:19.958 free 0x2000009fffc0 4194304 00:04:19.958 unregister 0x200000800000 6291456 PASSED 00:04:19.958 malloc 8388608 00:04:19.958 register 0x200000400000 10485760 00:04:19.958 buf 0x2000005fffc0 len 8388608 PASSED 00:04:19.958 free 0x2000005fffc0 8388608 00:04:19.958 unregister 0x200000400000 10485760 PASSED 00:04:19.958 passed 00:04:19.958 00:04:19.958 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.958 suites 1 1 n/a 0 0 00:04:19.958 tests 1 1 1 0 0 00:04:19.958 asserts 15 15 15 0 n/a 00:04:19.958 00:04:19.958 Elapsed time = 0.047 seconds 00:04:19.958 00:04:19.958 real 0m0.215s 00:04:19.958 user 0m0.066s 00:04:19.958 sys 0m0.045s 00:04:19.958 20:31:36 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.958 ************************************ 00:04:19.958 END TEST env_mem_callbacks 00:04:19.958 ************************************ 00:04:19.958 20:31:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:19.958 ************************************ 00:04:19.958 END TEST env 00:04:19.958 ************************************ 00:04:19.958 00:04:19.958 real 0m5.945s 00:04:19.958 user 0m4.557s 00:04:19.958 sys 0m1.000s 00:04:19.958 20:31:37 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.958 20:31:37 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:19.958 20:31:37 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:19.958 20:31:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.958 20:31:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.958 20:31:37 -- common/autotest_common.sh@10 -- # set +x 00:04:19.958 ************************************ 00:04:19.958 START TEST rpc 00:04:19.958 ************************************ 00:04:19.958 20:31:37 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:20.219 * Looking for test storage... 00:04:20.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:20.219 20:31:37 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.219 20:31:37 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.219 20:31:37 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.219 20:31:37 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.219 20:31:37 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.219 20:31:37 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.219 20:31:37 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.219 20:31:37 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.219 20:31:37 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.219 20:31:37 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.219 20:31:37 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.219 20:31:37 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:20.219 20:31:37 rpc -- scripts/common.sh@345 -- # : 1 00:04:20.219 20:31:37 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.219 20:31:37 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.219 20:31:37 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:20.219 20:31:37 rpc -- scripts/common.sh@353 -- # local d=1 00:04:20.219 20:31:37 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.219 20:31:37 rpc -- scripts/common.sh@355 -- # echo 1 00:04:20.219 20:31:37 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.219 20:31:37 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:20.219 20:31:37 rpc -- scripts/common.sh@353 -- # local d=2 00:04:20.219 20:31:37 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.219 20:31:37 rpc -- scripts/common.sh@355 -- # echo 2 00:04:20.219 20:31:37 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.219 20:31:37 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.219 20:31:37 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.219 20:31:37 rpc -- scripts/common.sh@368 -- # return 0 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:20.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.219 --rc genhtml_branch_coverage=1 00:04:20.219 --rc genhtml_function_coverage=1 00:04:20.219 --rc genhtml_legend=1 00:04:20.219 --rc geninfo_all_blocks=1 00:04:20.219 --rc geninfo_unexecuted_blocks=1 00:04:20.219 00:04:20.219 ' 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:20.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.219 --rc genhtml_branch_coverage=1 00:04:20.219 --rc genhtml_function_coverage=1 00:04:20.219 --rc genhtml_legend=1 00:04:20.219 --rc geninfo_all_blocks=1 00:04:20.219 --rc geninfo_unexecuted_blocks=1 00:04:20.219 00:04:20.219 ' 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:20.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.219 --rc genhtml_branch_coverage=1 00:04:20.219 --rc genhtml_function_coverage=1 00:04:20.219 --rc genhtml_legend=1 00:04:20.219 --rc geninfo_all_blocks=1 00:04:20.219 --rc geninfo_unexecuted_blocks=1 00:04:20.219 00:04:20.219 ' 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:20.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.219 --rc genhtml_branch_coverage=1 00:04:20.219 --rc genhtml_function_coverage=1 00:04:20.219 --rc genhtml_legend=1 00:04:20.219 --rc geninfo_all_blocks=1 00:04:20.219 --rc geninfo_unexecuted_blocks=1 00:04:20.219 00:04:20.219 ' 00:04:20.219 20:31:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57110 00:04:20.219 20:31:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.219 20:31:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57110 00:04:20.219 20:31:37 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@835 -- # '[' -z 57110 ']' 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
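The waitforlisten step above amounts to starting the target and polling the RPC socket until it answers; a minimal sketch of the same loop (the rpc_get_methods probe is an assumed way to ping the socket, not necessarily what waitforlisten literally runs):
    sudo ./build/bin/spdk_tgt -e bdev &
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done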
00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.219 20:31:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.219 [2024-12-06 20:31:37.256034] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:04:20.220 [2024-12-06 20:31:37.256270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57110 ] 00:04:20.480 [2024-12-06 20:31:37.417769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.480 [2024-12-06 20:31:37.516427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:20.480 [2024-12-06 20:31:37.516620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57110' to capture a snapshot of events at runtime. 00:04:20.480 [2024-12-06 20:31:37.516636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:20.480 [2024-12-06 20:31:37.516646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:20.480 [2024-12-06 20:31:37.516653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57110 for offline analysis/debug. 00:04:20.480 [2024-12-06 20:31:37.517522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.052 20:31:38 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.052 20:31:38 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:21.052 20:31:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.052 20:31:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.052 20:31:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:21.052 20:31:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:21.052 20:31:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.052 20:31:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.052 20:31:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.052 ************************************ 00:04:21.052 START TEST rpc_integrity 00:04:21.052 ************************************ 00:04:21.052 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:21.052 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:21.052 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.053 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.053 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.053 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:21.053 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:21.053 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:21.053 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:21.053 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.053 20:31:38 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.053 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.053 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:21.053 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:21.053 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.053 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.053 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.053 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.053 { 00:04:21.053 "name": "Malloc0", 00:04:21.053 "aliases": [ 00:04:21.053 "9f807d31-ca62-4f81-aa8a-9c24a3128b7f" 00:04:21.053 ], 00:04:21.053 "product_name": "Malloc disk", 00:04:21.053 "block_size": 512, 00:04:21.053 "num_blocks": 16384, 00:04:21.053 "uuid": "9f807d31-ca62-4f81-aa8a-9c24a3128b7f", 00:04:21.053 "assigned_rate_limits": { 00:04:21.053 "rw_ios_per_sec": 0, 00:04:21.053 "rw_mbytes_per_sec": 0, 00:04:21.053 "r_mbytes_per_sec": 0, 00:04:21.053 "w_mbytes_per_sec": 0 00:04:21.053 }, 00:04:21.053 "claimed": false, 00:04:21.053 "zoned": false, 00:04:21.053 "supported_io_types": { 00:04:21.053 "read": true, 00:04:21.053 "write": true, 00:04:21.053 "unmap": true, 00:04:21.053 "flush": true, 00:04:21.053 "reset": true, 00:04:21.053 "nvme_admin": false, 00:04:21.053 "nvme_io": false, 00:04:21.053 "nvme_io_md": false, 00:04:21.053 "write_zeroes": true, 00:04:21.053 "zcopy": true, 00:04:21.053 "get_zone_info": false, 00:04:21.053 "zone_management": false, 00:04:21.053 "zone_append": false, 00:04:21.053 "compare": false, 00:04:21.053 "compare_and_write": false, 00:04:21.053 "abort": true, 00:04:21.053 "seek_hole": false, 00:04:21.053 "seek_data": false, 00:04:21.053 "copy": true, 00:04:21.053 "nvme_iov_md": false 00:04:21.053 }, 00:04:21.053 "memory_domains": [ 00:04:21.053 { 00:04:21.053 "dma_device_id": "system", 00:04:21.053 "dma_device_type": 1 00:04:21.053 }, 00:04:21.053 { 00:04:21.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.053 "dma_device_type": 2 00:04:21.053 } 00:04:21.053 ], 00:04:21.053 "driver_specific": {} 00:04:21.053 } 00:04:21.053 ]' 00:04:21.314 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:21.314 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.314 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:21.314 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.314 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.314 [2024-12-06 20:31:38.214946] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:21.314 [2024-12-06 20:31:38.214998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.314 [2024-12-06 20:31:38.215020] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:21.314 [2024-12-06 20:31:38.215031] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.314 [2024-12-06 20:31:38.217184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.314 [2024-12-06 20:31:38.217326] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.314 Passthru0 00:04:21.314 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.314 
20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.314 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.314 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.314 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.314 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.314 { 00:04:21.314 "name": "Malloc0", 00:04:21.314 "aliases": [ 00:04:21.314 "9f807d31-ca62-4f81-aa8a-9c24a3128b7f" 00:04:21.314 ], 00:04:21.314 "product_name": "Malloc disk", 00:04:21.314 "block_size": 512, 00:04:21.314 "num_blocks": 16384, 00:04:21.314 "uuid": "9f807d31-ca62-4f81-aa8a-9c24a3128b7f", 00:04:21.314 "assigned_rate_limits": { 00:04:21.314 "rw_ios_per_sec": 0, 00:04:21.314 "rw_mbytes_per_sec": 0, 00:04:21.314 "r_mbytes_per_sec": 0, 00:04:21.314 "w_mbytes_per_sec": 0 00:04:21.314 }, 00:04:21.314 "claimed": true, 00:04:21.314 "claim_type": "exclusive_write", 00:04:21.314 "zoned": false, 00:04:21.314 "supported_io_types": { 00:04:21.314 "read": true, 00:04:21.314 "write": true, 00:04:21.314 "unmap": true, 00:04:21.314 "flush": true, 00:04:21.314 "reset": true, 00:04:21.314 "nvme_admin": false, 00:04:21.314 "nvme_io": false, 00:04:21.314 "nvme_io_md": false, 00:04:21.314 "write_zeroes": true, 00:04:21.314 "zcopy": true, 00:04:21.314 "get_zone_info": false, 00:04:21.314 "zone_management": false, 00:04:21.314 "zone_append": false, 00:04:21.314 "compare": false, 00:04:21.314 "compare_and_write": false, 00:04:21.314 "abort": true, 00:04:21.314 "seek_hole": false, 00:04:21.314 "seek_data": false, 00:04:21.314 "copy": true, 00:04:21.314 "nvme_iov_md": false 00:04:21.314 }, 00:04:21.314 "memory_domains": [ 00:04:21.314 { 00:04:21.314 "dma_device_id": "system", 00:04:21.314 "dma_device_type": 1 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.315 "dma_device_type": 2 00:04:21.315 } 00:04:21.315 ], 00:04:21.315 "driver_specific": {} 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "name": "Passthru0", 00:04:21.315 "aliases": [ 00:04:21.315 "3850ed8e-161f-5454-a5b9-b4ed611c0e10" 00:04:21.315 ], 00:04:21.315 "product_name": "passthru", 00:04:21.315 "block_size": 512, 00:04:21.315 "num_blocks": 16384, 00:04:21.315 "uuid": "3850ed8e-161f-5454-a5b9-b4ed611c0e10", 00:04:21.315 "assigned_rate_limits": { 00:04:21.315 "rw_ios_per_sec": 0, 00:04:21.315 "rw_mbytes_per_sec": 0, 00:04:21.315 "r_mbytes_per_sec": 0, 00:04:21.315 "w_mbytes_per_sec": 0 00:04:21.315 }, 00:04:21.315 "claimed": false, 00:04:21.315 "zoned": false, 00:04:21.315 "supported_io_types": { 00:04:21.315 "read": true, 00:04:21.315 "write": true, 00:04:21.315 "unmap": true, 00:04:21.315 "flush": true, 00:04:21.315 "reset": true, 00:04:21.315 "nvme_admin": false, 00:04:21.315 "nvme_io": false, 00:04:21.315 "nvme_io_md": false, 00:04:21.315 "write_zeroes": true, 00:04:21.315 "zcopy": true, 00:04:21.315 "get_zone_info": false, 00:04:21.315 "zone_management": false, 00:04:21.315 "zone_append": false, 00:04:21.315 "compare": false, 00:04:21.315 "compare_and_write": false, 00:04:21.315 "abort": true, 00:04:21.315 "seek_hole": false, 00:04:21.315 "seek_data": false, 00:04:21.315 "copy": true, 00:04:21.315 "nvme_iov_md": false 00:04:21.315 }, 00:04:21.315 "memory_domains": [ 00:04:21.315 { 00:04:21.315 "dma_device_id": "system", 00:04:21.315 "dma_device_type": 1 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.315 "dma_device_type": 2 
00:04:21.315 } 00:04:21.315 ], 00:04:21.315 "driver_specific": { 00:04:21.315 "passthru": { 00:04:21.315 "name": "Passthru0", 00:04:21.315 "base_bdev_name": "Malloc0" 00:04:21.315 } 00:04:21.315 } 00:04:21.315 } 00:04:21.315 ]' 00:04:21.315 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.315 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.315 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.315 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.315 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.315 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.315 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:21.315 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.315 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.315 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.315 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:21.315 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.315 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.315 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.315 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:21.315 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:21.315 ************************************ 00:04:21.315 END TEST rpc_integrity 00:04:21.315 ************************************ 00:04:21.315 20:31:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.315 00:04:21.315 real 0m0.241s 00:04:21.315 user 0m0.129s 00:04:21.315 sys 0m0.026s 00:04:21.315 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.315 20:31:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.315 20:31:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:21.315 20:31:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.315 20:31:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.315 20:31:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.315 ************************************ 00:04:21.315 START TEST rpc_plugins 00:04:21.315 ************************************ 00:04:21.315 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:21.315 20:31:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:21.315 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.315 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.315 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.315 20:31:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:21.315 20:31:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:21.315 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.315 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.315 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.315 20:31:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:21.315 { 00:04:21.315 "name": "Malloc1", 00:04:21.315 "aliases": 
[ 00:04:21.315 "3a73bb80-9377-415a-b8b1-1f5b3c847c8f" 00:04:21.315 ], 00:04:21.315 "product_name": "Malloc disk", 00:04:21.315 "block_size": 4096, 00:04:21.315 "num_blocks": 256, 00:04:21.315 "uuid": "3a73bb80-9377-415a-b8b1-1f5b3c847c8f", 00:04:21.315 "assigned_rate_limits": { 00:04:21.315 "rw_ios_per_sec": 0, 00:04:21.315 "rw_mbytes_per_sec": 0, 00:04:21.315 "r_mbytes_per_sec": 0, 00:04:21.315 "w_mbytes_per_sec": 0 00:04:21.315 }, 00:04:21.315 "claimed": false, 00:04:21.315 "zoned": false, 00:04:21.315 "supported_io_types": { 00:04:21.315 "read": true, 00:04:21.315 "write": true, 00:04:21.315 "unmap": true, 00:04:21.315 "flush": true, 00:04:21.315 "reset": true, 00:04:21.315 "nvme_admin": false, 00:04:21.315 "nvme_io": false, 00:04:21.315 "nvme_io_md": false, 00:04:21.315 "write_zeroes": true, 00:04:21.315 "zcopy": true, 00:04:21.315 "get_zone_info": false, 00:04:21.315 "zone_management": false, 00:04:21.315 "zone_append": false, 00:04:21.315 "compare": false, 00:04:21.315 "compare_and_write": false, 00:04:21.315 "abort": true, 00:04:21.315 "seek_hole": false, 00:04:21.315 "seek_data": false, 00:04:21.315 "copy": true, 00:04:21.315 "nvme_iov_md": false 00:04:21.315 }, 00:04:21.315 "memory_domains": [ 00:04:21.315 { 00:04:21.315 "dma_device_id": "system", 00:04:21.315 "dma_device_type": 1 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.315 "dma_device_type": 2 00:04:21.315 } 00:04:21.315 ], 00:04:21.315 "driver_specific": {} 00:04:21.315 } 00:04:21.315 ]' 00:04:21.315 20:31:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:21.577 20:31:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:21.578 20:31:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:21.578 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.578 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.578 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.578 20:31:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:21.578 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.578 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.578 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.578 20:31:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:21.578 20:31:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:21.578 ************************************ 00:04:21.578 END TEST rpc_plugins 00:04:21.578 ************************************ 00:04:21.578 20:31:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:21.578 00:04:21.578 real 0m0.124s 00:04:21.578 user 0m0.069s 00:04:21.578 sys 0m0.016s 00:04:21.578 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.578 20:31:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.578 20:31:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:21.578 20:31:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.578 20:31:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.578 20:31:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.578 ************************************ 00:04:21.578 START TEST rpc_trace_cmd_test 00:04:21.578 ************************************ 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:21.578 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57110", 00:04:21.578 "tpoint_group_mask": "0x8", 00:04:21.578 "iscsi_conn": { 00:04:21.578 "mask": "0x2", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "scsi": { 00:04:21.578 "mask": "0x4", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "bdev": { 00:04:21.578 "mask": "0x8", 00:04:21.578 "tpoint_mask": "0xffffffffffffffff" 00:04:21.578 }, 00:04:21.578 "nvmf_rdma": { 00:04:21.578 "mask": "0x10", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "nvmf_tcp": { 00:04:21.578 "mask": "0x20", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "ftl": { 00:04:21.578 "mask": "0x40", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "blobfs": { 00:04:21.578 "mask": "0x80", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "dsa": { 00:04:21.578 "mask": "0x200", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "thread": { 00:04:21.578 "mask": "0x400", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "nvme_pcie": { 00:04:21.578 "mask": "0x800", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "iaa": { 00:04:21.578 "mask": "0x1000", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "nvme_tcp": { 00:04:21.578 "mask": "0x2000", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "bdev_nvme": { 00:04:21.578 "mask": "0x4000", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "sock": { 00:04:21.578 "mask": "0x8000", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "blob": { 00:04:21.578 "mask": "0x10000", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "bdev_raid": { 00:04:21.578 "mask": "0x20000", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 }, 00:04:21.578 "scheduler": { 00:04:21.578 "mask": "0x40000", 00:04:21.578 "tpoint_mask": "0x0" 00:04:21.578 } 00:04:21.578 }' 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:21.578 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:21.840 ************************************ 00:04:21.840 END TEST rpc_trace_cmd_test 00:04:21.840 ************************************ 00:04:21.840 20:31:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:21.840 00:04:21.840 real 0m0.174s 
00:04:21.840 user 0m0.148s 00:04:21.840 sys 0m0.017s 00:04:21.840 20:31:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.840 20:31:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.841 20:31:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:21.841 20:31:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:21.841 20:31:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:21.841 20:31:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.841 20:31:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.841 20:31:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.841 ************************************ 00:04:21.841 START TEST rpc_daemon_integrity 00:04:21.841 ************************************ 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.841 { 00:04:21.841 "name": "Malloc2", 00:04:21.841 "aliases": [ 00:04:21.841 "369c1566-7338-4700-84ac-30a900738797" 00:04:21.841 ], 00:04:21.841 "product_name": "Malloc disk", 00:04:21.841 "block_size": 512, 00:04:21.841 "num_blocks": 16384, 00:04:21.841 "uuid": "369c1566-7338-4700-84ac-30a900738797", 00:04:21.841 "assigned_rate_limits": { 00:04:21.841 "rw_ios_per_sec": 0, 00:04:21.841 "rw_mbytes_per_sec": 0, 00:04:21.841 "r_mbytes_per_sec": 0, 00:04:21.841 "w_mbytes_per_sec": 0 00:04:21.841 }, 00:04:21.841 "claimed": false, 00:04:21.841 "zoned": false, 00:04:21.841 "supported_io_types": { 00:04:21.841 "read": true, 00:04:21.841 "write": true, 00:04:21.841 "unmap": true, 00:04:21.841 "flush": true, 00:04:21.841 "reset": true, 00:04:21.841 "nvme_admin": false, 00:04:21.841 "nvme_io": false, 00:04:21.841 "nvme_io_md": false, 00:04:21.841 "write_zeroes": true, 00:04:21.841 "zcopy": true, 00:04:21.841 "get_zone_info": false, 00:04:21.841 "zone_management": false, 00:04:21.841 "zone_append": false, 00:04:21.841 "compare": false, 00:04:21.841 
"compare_and_write": false, 00:04:21.841 "abort": true, 00:04:21.841 "seek_hole": false, 00:04:21.841 "seek_data": false, 00:04:21.841 "copy": true, 00:04:21.841 "nvme_iov_md": false 00:04:21.841 }, 00:04:21.841 "memory_domains": [ 00:04:21.841 { 00:04:21.841 "dma_device_id": "system", 00:04:21.841 "dma_device_type": 1 00:04:21.841 }, 00:04:21.841 { 00:04:21.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.841 "dma_device_type": 2 00:04:21.841 } 00:04:21.841 ], 00:04:21.841 "driver_specific": {} 00:04:21.841 } 00:04:21.841 ]' 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.841 [2024-12-06 20:31:38.865202] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:21.841 [2024-12-06 20:31:38.865273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.841 [2024-12-06 20:31:38.865293] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:21.841 [2024-12-06 20:31:38.865304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.841 [2024-12-06 20:31:38.867475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.841 [2024-12-06 20:31:38.867520] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.841 Passthru0 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.841 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.841 { 00:04:21.841 "name": "Malloc2", 00:04:21.841 "aliases": [ 00:04:21.841 "369c1566-7338-4700-84ac-30a900738797" 00:04:21.841 ], 00:04:21.841 "product_name": "Malloc disk", 00:04:21.841 "block_size": 512, 00:04:21.841 "num_blocks": 16384, 00:04:21.841 "uuid": "369c1566-7338-4700-84ac-30a900738797", 00:04:21.841 "assigned_rate_limits": { 00:04:21.841 "rw_ios_per_sec": 0, 00:04:21.841 "rw_mbytes_per_sec": 0, 00:04:21.841 "r_mbytes_per_sec": 0, 00:04:21.841 "w_mbytes_per_sec": 0 00:04:21.841 }, 00:04:21.841 "claimed": true, 00:04:21.841 "claim_type": "exclusive_write", 00:04:21.841 "zoned": false, 00:04:21.841 "supported_io_types": { 00:04:21.841 "read": true, 00:04:21.841 "write": true, 00:04:21.841 "unmap": true, 00:04:21.841 "flush": true, 00:04:21.841 "reset": true, 00:04:21.841 "nvme_admin": false, 00:04:21.841 "nvme_io": false, 00:04:21.841 "nvme_io_md": false, 00:04:21.841 "write_zeroes": true, 00:04:21.841 "zcopy": true, 00:04:21.841 "get_zone_info": false, 00:04:21.841 "zone_management": false, 00:04:21.841 "zone_append": false, 00:04:21.841 "compare": false, 00:04:21.841 "compare_and_write": false, 00:04:21.841 "abort": true, 00:04:21.841 "seek_hole": false, 00:04:21.841 "seek_data": false, 
00:04:21.841 "copy": true, 00:04:21.841 "nvme_iov_md": false 00:04:21.841 }, 00:04:21.841 "memory_domains": [ 00:04:21.841 { 00:04:21.841 "dma_device_id": "system", 00:04:21.841 "dma_device_type": 1 00:04:21.841 }, 00:04:21.841 { 00:04:21.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.841 "dma_device_type": 2 00:04:21.841 } 00:04:21.841 ], 00:04:21.841 "driver_specific": {} 00:04:21.841 }, 00:04:21.841 { 00:04:21.841 "name": "Passthru0", 00:04:21.841 "aliases": [ 00:04:21.841 "929ec8a8-4986-51a8-b2f6-c8abea20da1c" 00:04:21.841 ], 00:04:21.841 "product_name": "passthru", 00:04:21.841 "block_size": 512, 00:04:21.841 "num_blocks": 16384, 00:04:21.841 "uuid": "929ec8a8-4986-51a8-b2f6-c8abea20da1c", 00:04:21.841 "assigned_rate_limits": { 00:04:21.841 "rw_ios_per_sec": 0, 00:04:21.841 "rw_mbytes_per_sec": 0, 00:04:21.841 "r_mbytes_per_sec": 0, 00:04:21.841 "w_mbytes_per_sec": 0 00:04:21.841 }, 00:04:21.841 "claimed": false, 00:04:21.841 "zoned": false, 00:04:21.841 "supported_io_types": { 00:04:21.841 "read": true, 00:04:21.841 "write": true, 00:04:21.841 "unmap": true, 00:04:21.841 "flush": true, 00:04:21.841 "reset": true, 00:04:21.841 "nvme_admin": false, 00:04:21.841 "nvme_io": false, 00:04:21.841 "nvme_io_md": false, 00:04:21.841 "write_zeroes": true, 00:04:21.841 "zcopy": true, 00:04:21.841 "get_zone_info": false, 00:04:21.841 "zone_management": false, 00:04:21.841 "zone_append": false, 00:04:21.841 "compare": false, 00:04:21.841 "compare_and_write": false, 00:04:21.841 "abort": true, 00:04:21.841 "seek_hole": false, 00:04:21.841 "seek_data": false, 00:04:21.841 "copy": true, 00:04:21.841 "nvme_iov_md": false 00:04:21.841 }, 00:04:21.841 "memory_domains": [ 00:04:21.841 { 00:04:21.841 "dma_device_id": "system", 00:04:21.841 "dma_device_type": 1 00:04:21.841 }, 00:04:21.841 { 00:04:21.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.841 "dma_device_type": 2 00:04:21.841 } 00:04:21.841 ], 00:04:21.841 "driver_specific": { 00:04:21.841 "passthru": { 00:04:21.841 "name": "Passthru0", 00:04:21.841 "base_bdev_name": "Malloc2" 00:04:21.842 } 00:04:21.842 } 00:04:21.842 } 00:04:21.842 ]' 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:21.842 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:22.104 ************************************ 00:04:22.104 END TEST rpc_daemon_integrity 00:04:22.104 ************************************ 00:04:22.104 20:31:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:22.104 00:04:22.104 real 0m0.239s 00:04:22.104 user 0m0.126s 00:04:22.104 sys 0m0.034s 00:04:22.104 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.104 20:31:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.104 20:31:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:22.104 20:31:39 rpc -- rpc/rpc.sh@84 -- # killprocess 57110 00:04:22.104 20:31:39 rpc -- common/autotest_common.sh@954 -- # '[' -z 57110 ']' 00:04:22.104 20:31:39 rpc -- common/autotest_common.sh@958 -- # kill -0 57110 00:04:22.104 20:31:39 rpc -- common/autotest_common.sh@959 -- # uname 00:04:22.104 20:31:39 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.104 20:31:39 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57110 00:04:22.104 killing process with pid 57110 00:04:22.104 20:31:39 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.104 20:31:39 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.104 20:31:39 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57110' 00:04:22.104 20:31:39 rpc -- common/autotest_common.sh@973 -- # kill 57110 00:04:22.104 20:31:39 rpc -- common/autotest_common.sh@978 -- # wait 57110 00:04:23.487 00:04:23.487 real 0m3.392s 00:04:23.487 user 0m3.851s 00:04:23.487 sys 0m0.540s 00:04:23.487 20:31:40 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.487 ************************************ 00:04:23.487 END TEST rpc 00:04:23.487 ************************************ 00:04:23.487 20:31:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.487 20:31:40 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:23.487 20:31:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.487 20:31:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.487 20:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:23.487 ************************************ 00:04:23.487 START TEST skip_rpc 00:04:23.487 ************************************ 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:23.487 * Looking for test storage... 
00:04:23.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.487 20:31:40 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:23.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.487 --rc genhtml_branch_coverage=1 00:04:23.487 --rc genhtml_function_coverage=1 00:04:23.487 --rc genhtml_legend=1 00:04:23.487 --rc geninfo_all_blocks=1 00:04:23.487 --rc geninfo_unexecuted_blocks=1 00:04:23.487 00:04:23.487 ' 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:23.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.487 --rc genhtml_branch_coverage=1 00:04:23.487 --rc genhtml_function_coverage=1 00:04:23.487 --rc genhtml_legend=1 00:04:23.487 --rc geninfo_all_blocks=1 00:04:23.487 --rc geninfo_unexecuted_blocks=1 00:04:23.487 00:04:23.487 ' 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:23.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.487 --rc genhtml_branch_coverage=1 00:04:23.487 --rc genhtml_function_coverage=1 00:04:23.487 --rc genhtml_legend=1 00:04:23.487 --rc geninfo_all_blocks=1 00:04:23.487 --rc geninfo_unexecuted_blocks=1 00:04:23.487 00:04:23.487 ' 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:23.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.487 --rc genhtml_branch_coverage=1 00:04:23.487 --rc genhtml_function_coverage=1 00:04:23.487 --rc genhtml_legend=1 00:04:23.487 --rc geninfo_all_blocks=1 00:04:23.487 --rc geninfo_unexecuted_blocks=1 00:04:23.487 00:04:23.487 ' 00:04:23.487 20:31:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:23.487 20:31:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:23.487 20:31:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.487 20:31:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.748 ************************************ 00:04:23.748 START TEST skip_rpc 00:04:23.748 ************************************ 00:04:23.748 20:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:23.748 20:31:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57325 00:04:23.748 20:31:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.748 20:31:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:23.748 20:31:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:23.748 [2024-12-06 20:31:40.679160] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:04:23.748 [2024-12-06 20:31:40.679254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57325 ] 00:04:23.748 [2024-12-06 20:31:40.829589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.051 [2024-12-06 20:31:40.905419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57325 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57325 ']' 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57325 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57325 00:04:29.337 killing process with pid 57325 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57325' 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57325 00:04:29.337 20:31:45 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57325 00:04:29.909 00:04:29.909 real 0m6.208s 00:04:29.909 user 0m5.851s 00:04:29.909 sys 0m0.254s 00:04:29.909 20:31:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.909 20:31:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.909 ************************************ 00:04:29.909 END TEST skip_rpc 00:04:29.909 
************************************ 00:04:29.909 20:31:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:29.909 20:31:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.909 20:31:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.909 20:31:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.909 ************************************ 00:04:29.909 START TEST skip_rpc_with_json 00:04:29.909 ************************************ 00:04:29.909 20:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:29.909 20:31:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:29.909 20:31:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57421 00:04:29.909 20:31:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.909 20:31:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57421 00:04:29.909 20:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57421 ']' 00:04:29.909 20:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.909 20:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.909 20:31:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.909 20:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.909 20:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.909 20:31:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.909 [2024-12-06 20:31:46.947529] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:04:29.909 [2024-12-06 20:31:46.947775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57421 ] 00:04:30.171 [2024-12-06 20:31:47.098140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.171 [2024-12-06 20:31:47.177713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.745 [2024-12-06 20:31:47.785828] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:30.745 request: 00:04:30.745 { 00:04:30.745 "trtype": "tcp", 00:04:30.745 "method": "nvmf_get_transports", 00:04:30.745 "req_id": 1 00:04:30.745 } 00:04:30.745 Got JSON-RPC error response 00:04:30.745 response: 00:04:30.745 { 00:04:30.745 "code": -19, 00:04:30.745 "message": "No such device" 00:04:30.745 } 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.745 [2024-12-06 20:31:47.797920] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.745 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.007 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.007 20:31:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:31.007 { 00:04:31.007 "subsystems": [ 00:04:31.007 { 00:04:31.007 "subsystem": "fsdev", 00:04:31.007 "config": [ 00:04:31.007 { 00:04:31.007 "method": "fsdev_set_opts", 00:04:31.007 "params": { 00:04:31.007 "fsdev_io_pool_size": 65535, 00:04:31.007 "fsdev_io_cache_size": 256 00:04:31.007 } 00:04:31.007 } 00:04:31.007 ] 00:04:31.007 }, 00:04:31.007 { 00:04:31.007 "subsystem": "keyring", 00:04:31.007 "config": [] 00:04:31.007 }, 00:04:31.007 { 00:04:31.007 "subsystem": "iobuf", 00:04:31.007 "config": [ 00:04:31.007 { 00:04:31.007 "method": "iobuf_set_options", 00:04:31.007 "params": { 00:04:31.007 "small_pool_count": 8192, 00:04:31.007 "large_pool_count": 1024, 00:04:31.007 "small_bufsize": 8192, 00:04:31.007 "large_bufsize": 135168, 00:04:31.007 "enable_numa": false 00:04:31.007 } 00:04:31.007 } 00:04:31.007 ] 00:04:31.007 }, 00:04:31.007 { 00:04:31.007 "subsystem": "sock", 00:04:31.007 "config": [ 00:04:31.007 { 
00:04:31.007 "method": "sock_set_default_impl", 00:04:31.007 "params": { 00:04:31.007 "impl_name": "posix" 00:04:31.007 } 00:04:31.007 }, 00:04:31.007 { 00:04:31.007 "method": "sock_impl_set_options", 00:04:31.007 "params": { 00:04:31.007 "impl_name": "ssl", 00:04:31.007 "recv_buf_size": 4096, 00:04:31.007 "send_buf_size": 4096, 00:04:31.007 "enable_recv_pipe": true, 00:04:31.007 "enable_quickack": false, 00:04:31.007 "enable_placement_id": 0, 00:04:31.007 "enable_zerocopy_send_server": true, 00:04:31.007 "enable_zerocopy_send_client": false, 00:04:31.007 "zerocopy_threshold": 0, 00:04:31.007 "tls_version": 0, 00:04:31.007 "enable_ktls": false 00:04:31.007 } 00:04:31.007 }, 00:04:31.007 { 00:04:31.007 "method": "sock_impl_set_options", 00:04:31.007 "params": { 00:04:31.007 "impl_name": "posix", 00:04:31.007 "recv_buf_size": 2097152, 00:04:31.007 "send_buf_size": 2097152, 00:04:31.007 "enable_recv_pipe": true, 00:04:31.007 "enable_quickack": false, 00:04:31.007 "enable_placement_id": 0, 00:04:31.007 "enable_zerocopy_send_server": true, 00:04:31.007 "enable_zerocopy_send_client": false, 00:04:31.007 "zerocopy_threshold": 0, 00:04:31.007 "tls_version": 0, 00:04:31.007 "enable_ktls": false 00:04:31.007 } 00:04:31.007 } 00:04:31.007 ] 00:04:31.007 }, 00:04:31.007 { 00:04:31.007 "subsystem": "vmd", 00:04:31.007 "config": [] 00:04:31.007 }, 00:04:31.007 { 00:04:31.007 "subsystem": "accel", 00:04:31.007 "config": [ 00:04:31.007 { 00:04:31.007 "method": "accel_set_options", 00:04:31.007 "params": { 00:04:31.007 "small_cache_size": 128, 00:04:31.007 "large_cache_size": 16, 00:04:31.007 "task_count": 2048, 00:04:31.007 "sequence_count": 2048, 00:04:31.007 "buf_count": 2048 00:04:31.007 } 00:04:31.007 } 00:04:31.007 ] 00:04:31.007 }, 00:04:31.007 { 00:04:31.007 "subsystem": "bdev", 00:04:31.007 "config": [ 00:04:31.007 { 00:04:31.007 "method": "bdev_set_options", 00:04:31.007 "params": { 00:04:31.007 "bdev_io_pool_size": 65535, 00:04:31.007 "bdev_io_cache_size": 256, 00:04:31.007 "bdev_auto_examine": true, 00:04:31.007 "iobuf_small_cache_size": 128, 00:04:31.007 "iobuf_large_cache_size": 16 00:04:31.007 } 00:04:31.007 }, 00:04:31.007 { 00:04:31.007 "method": "bdev_raid_set_options", 00:04:31.007 "params": { 00:04:31.007 "process_window_size_kb": 1024, 00:04:31.007 "process_max_bandwidth_mb_sec": 0 00:04:31.007 } 00:04:31.007 }, 00:04:31.007 { 00:04:31.007 "method": "bdev_iscsi_set_options", 00:04:31.007 "params": { 00:04:31.007 "timeout_sec": 30 00:04:31.007 } 00:04:31.007 }, 00:04:31.007 { 00:04:31.007 "method": "bdev_nvme_set_options", 00:04:31.007 "params": { 00:04:31.007 "action_on_timeout": "none", 00:04:31.007 "timeout_us": 0, 00:04:31.007 "timeout_admin_us": 0, 00:04:31.007 "keep_alive_timeout_ms": 10000, 00:04:31.007 "arbitration_burst": 0, 00:04:31.007 "low_priority_weight": 0, 00:04:31.007 "medium_priority_weight": 0, 00:04:31.007 "high_priority_weight": 0, 00:04:31.007 "nvme_adminq_poll_period_us": 10000, 00:04:31.007 "nvme_ioq_poll_period_us": 0, 00:04:31.008 "io_queue_requests": 0, 00:04:31.008 "delay_cmd_submit": true, 00:04:31.008 "transport_retry_count": 4, 00:04:31.008 "bdev_retry_count": 3, 00:04:31.008 "transport_ack_timeout": 0, 00:04:31.008 "ctrlr_loss_timeout_sec": 0, 00:04:31.008 "reconnect_delay_sec": 0, 00:04:31.008 "fast_io_fail_timeout_sec": 0, 00:04:31.008 "disable_auto_failback": false, 00:04:31.008 "generate_uuids": false, 00:04:31.008 "transport_tos": 0, 00:04:31.008 "nvme_error_stat": false, 00:04:31.008 "rdma_srq_size": 0, 00:04:31.008 "io_path_stat": false, 
00:04:31.008 "allow_accel_sequence": false, 00:04:31.008 "rdma_max_cq_size": 0, 00:04:31.008 "rdma_cm_event_timeout_ms": 0, 00:04:31.008 "dhchap_digests": [ 00:04:31.008 "sha256", 00:04:31.008 "sha384", 00:04:31.008 "sha512" 00:04:31.008 ], 00:04:31.008 "dhchap_dhgroups": [ 00:04:31.008 "null", 00:04:31.008 "ffdhe2048", 00:04:31.008 "ffdhe3072", 00:04:31.008 "ffdhe4096", 00:04:31.008 "ffdhe6144", 00:04:31.008 "ffdhe8192" 00:04:31.008 ] 00:04:31.008 } 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "method": "bdev_nvme_set_hotplug", 00:04:31.008 "params": { 00:04:31.008 "period_us": 100000, 00:04:31.008 "enable": false 00:04:31.008 } 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "method": "bdev_wait_for_examine" 00:04:31.008 } 00:04:31.008 ] 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "subsystem": "scsi", 00:04:31.008 "config": null 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "subsystem": "scheduler", 00:04:31.008 "config": [ 00:04:31.008 { 00:04:31.008 "method": "framework_set_scheduler", 00:04:31.008 "params": { 00:04:31.008 "name": "static" 00:04:31.008 } 00:04:31.008 } 00:04:31.008 ] 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "subsystem": "vhost_scsi", 00:04:31.008 "config": [] 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "subsystem": "vhost_blk", 00:04:31.008 "config": [] 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "subsystem": "ublk", 00:04:31.008 "config": [] 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "subsystem": "nbd", 00:04:31.008 "config": [] 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "subsystem": "nvmf", 00:04:31.008 "config": [ 00:04:31.008 { 00:04:31.008 "method": "nvmf_set_config", 00:04:31.008 "params": { 00:04:31.008 "discovery_filter": "match_any", 00:04:31.008 "admin_cmd_passthru": { 00:04:31.008 "identify_ctrlr": false 00:04:31.008 }, 00:04:31.008 "dhchap_digests": [ 00:04:31.008 "sha256", 00:04:31.008 "sha384", 00:04:31.008 "sha512" 00:04:31.008 ], 00:04:31.008 "dhchap_dhgroups": [ 00:04:31.008 "null", 00:04:31.008 "ffdhe2048", 00:04:31.008 "ffdhe3072", 00:04:31.008 "ffdhe4096", 00:04:31.008 "ffdhe6144", 00:04:31.008 "ffdhe8192" 00:04:31.008 ] 00:04:31.008 } 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "method": "nvmf_set_max_subsystems", 00:04:31.008 "params": { 00:04:31.008 "max_subsystems": 1024 00:04:31.008 } 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "method": "nvmf_set_crdt", 00:04:31.008 "params": { 00:04:31.008 "crdt1": 0, 00:04:31.008 "crdt2": 0, 00:04:31.008 "crdt3": 0 00:04:31.008 } 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "method": "nvmf_create_transport", 00:04:31.008 "params": { 00:04:31.008 "trtype": "TCP", 00:04:31.008 "max_queue_depth": 128, 00:04:31.008 "max_io_qpairs_per_ctrlr": 127, 00:04:31.008 "in_capsule_data_size": 4096, 00:04:31.008 "max_io_size": 131072, 00:04:31.008 "io_unit_size": 131072, 00:04:31.008 "max_aq_depth": 128, 00:04:31.008 "num_shared_buffers": 511, 00:04:31.008 "buf_cache_size": 4294967295, 00:04:31.008 "dif_insert_or_strip": false, 00:04:31.008 "zcopy": false, 00:04:31.008 "c2h_success": true, 00:04:31.008 "sock_priority": 0, 00:04:31.008 "abort_timeout_sec": 1, 00:04:31.008 "ack_timeout": 0, 00:04:31.008 "data_wr_pool_size": 0 00:04:31.008 } 00:04:31.008 } 00:04:31.008 ] 00:04:31.008 }, 00:04:31.008 { 00:04:31.008 "subsystem": "iscsi", 00:04:31.008 "config": [ 00:04:31.008 { 00:04:31.008 "method": "iscsi_set_options", 00:04:31.008 "params": { 00:04:31.008 "node_base": "iqn.2016-06.io.spdk", 00:04:31.008 "max_sessions": 128, 00:04:31.008 "max_connections_per_session": 2, 00:04:31.008 "max_queue_depth": 64, 00:04:31.008 
"default_time2wait": 2, 00:04:31.008 "default_time2retain": 20, 00:04:31.008 "first_burst_length": 8192, 00:04:31.008 "immediate_data": true, 00:04:31.008 "allow_duplicated_isid": false, 00:04:31.008 "error_recovery_level": 0, 00:04:31.008 "nop_timeout": 60, 00:04:31.008 "nop_in_interval": 30, 00:04:31.008 "disable_chap": false, 00:04:31.008 "require_chap": false, 00:04:31.008 "mutual_chap": false, 00:04:31.008 "chap_group": 0, 00:04:31.008 "max_large_datain_per_connection": 64, 00:04:31.008 "max_r2t_per_connection": 4, 00:04:31.008 "pdu_pool_size": 36864, 00:04:31.008 "immediate_data_pool_size": 16384, 00:04:31.008 "data_out_pool_size": 2048 00:04:31.008 } 00:04:31.008 } 00:04:31.008 ] 00:04:31.008 } 00:04:31.008 ] 00:04:31.008 } 00:04:31.008 20:31:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:31.008 20:31:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57421 00:04:31.008 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57421 ']' 00:04:31.008 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57421 00:04:31.008 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:31.008 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.008 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57421 00:04:31.008 killing process with pid 57421 00:04:31.008 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.008 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.008 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57421' 00:04:31.008 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57421 00:04:31.008 20:31:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57421 00:04:32.396 20:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57455 00:04:32.396 20:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:32.396 20:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.675 20:31:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57455 00:04:37.675 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57455 ']' 00:04:37.675 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57455 00:04:37.675 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:37.675 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.675 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57455 00:04:37.675 killing process with pid 57455 00:04:37.675 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.675 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.675 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57455' 00:04:37.675 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57455 00:04:37.675 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57455 00:04:38.613 20:31:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.614 ************************************ 00:04:38.614 END TEST skip_rpc_with_json 00:04:38.614 ************************************ 00:04:38.614 00:04:38.614 real 0m8.503s 00:04:38.614 user 0m8.149s 00:04:38.614 sys 0m0.559s 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.614 20:31:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:38.614 20:31:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.614 20:31:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.614 20:31:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.614 ************************************ 00:04:38.614 START TEST skip_rpc_with_delay 00:04:38.614 ************************************ 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.614 [2024-12-06 20:31:55.524375] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
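
The app.c *ERROR* above is exactly what skip_rpc_with_delay asserts: --wait-for-rpc parks subsystem initialization until a framework_start_init RPC arrives, so pairing it with --no-rpc-server leaves the target with no way to ever finish starting, and spdk_tgt must refuse the combination at launch. A minimal reproduction, assuming a built spdk_tgt binary:

    # Must exit non-zero and log "Cannot use '--wait-for-rpc' if no RPC server ..."
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected success" >&2
        exit 1
    fi
    echo "flag combination rejected, as the test expects"

The harness gets the same inversion from its NOT wrapper, which turns the non-zero exit status into a pass.
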
00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:38.614 00:04:38.614 real 0m0.128s 00:04:38.614 user 0m0.073s 00:04:38.614 sys 0m0.054s 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.614 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:38.614 ************************************ 00:04:38.614 END TEST skip_rpc_with_delay 00:04:38.614 ************************************ 00:04:38.614 20:31:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:38.614 20:31:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:38.614 20:31:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:38.614 20:31:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.614 20:31:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.614 20:31:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.614 ************************************ 00:04:38.614 START TEST exit_on_failed_rpc_init 00:04:38.614 ************************************ 00:04:38.614 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:38.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.614 20:31:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57577 00:04:38.614 20:31:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57577 00:04:38.614 20:31:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.614 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57577 ']' 00:04:38.614 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.614 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.614 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.614 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.614 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.614 [2024-12-06 20:31:55.700817] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:04:38.614 [2024-12-06 20:31:55.701097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57577 ] 00:04:38.872 [2024-12-06 20:31:55.855620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.872 [2024-12-06 20:31:55.938575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:39.443 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.705 [2024-12-06 20:31:56.604588] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:04:39.705 [2024-12-06 20:31:56.604701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57590 ] 00:04:39.705 [2024-12-06 20:31:56.765997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.967 [2024-12-06 20:31:56.862315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.967 [2024-12-06 20:31:56.862395] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
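
The rpc.c *ERROR* pair above is the induced failure at the heart of exit_on_failed_rpc_init: a second spdk_tgt (core mask 0x2) tries to listen on the same /var/tmp/spdk.sock that the first instance (mask 0x1) already owns, _spdk_rpc_listen fails, and the app exits non-zero, which the NOT wrapper then counts as a pass. A minimal sketch of the collision, assuming a built spdk_tgt and no other target already running:

    # First instance takes the default RPC socket
    ./build/bin/spdk_tgt -m 0x1 &
    first=$!
    sleep 2    # crude settle time; the harness polls with waitforlisten instead

    # Second instance on the same socket must fail: "... in use. Specify another."
    if ./build/bin/spdk_tgt -m 0x2; then
        echo "unexpected success" >&2
    fi

    kill "$first"
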
00:04:39.967 [2024-12-06 20:31:56.862408] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:39.967 [2024-12-06 20:31:56.862421] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57577 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57577 ']' 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57577 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57577 00:04:39.967 killing process with pid 57577 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57577' 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57577 00:04:39.967 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57577 00:04:41.355 ************************************ 00:04:41.355 END TEST exit_on_failed_rpc_init 00:04:41.355 ************************************ 00:04:41.355 00:04:41.355 real 0m2.622s 00:04:41.355 user 0m2.930s 00:04:41.355 sys 0m0.400s 00:04:41.355 20:31:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.355 20:31:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.355 20:31:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.355 ************************************ 00:04:41.355 END TEST skip_rpc 00:04:41.355 ************************************ 00:04:41.355 00:04:41.355 real 0m17.815s 00:04:41.355 user 0m17.139s 00:04:41.355 sys 0m1.447s 00:04:41.355 20:31:58 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.355 20:31:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.355 20:31:58 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.355 20:31:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.355 20:31:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.355 20:31:58 -- common/autotest_common.sh@10 -- # set +x 00:04:41.355 
************************************ 00:04:41.355 START TEST rpc_client 00:04:41.355 ************************************ 00:04:41.355 20:31:58 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.355 * Looking for test storage... 00:04:41.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:41.355 20:31:58 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.355 20:31:58 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.355 20:31:58 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.355 20:31:58 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.355 20:31:58 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:41.355 20:31:58 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.355 20:31:58 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.355 --rc genhtml_branch_coverage=1 00:04:41.355 --rc genhtml_function_coverage=1 00:04:41.355 --rc genhtml_legend=1 00:04:41.355 --rc geninfo_all_blocks=1 00:04:41.355 --rc geninfo_unexecuted_blocks=1 00:04:41.355 00:04:41.355 ' 00:04:41.355 20:31:58 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.355 --rc genhtml_branch_coverage=1 00:04:41.355 --rc genhtml_function_coverage=1 00:04:41.355 --rc genhtml_legend=1 00:04:41.355 --rc geninfo_all_blocks=1 00:04:41.355 --rc geninfo_unexecuted_blocks=1 00:04:41.355 00:04:41.355 ' 00:04:41.355 20:31:58 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.355 --rc genhtml_branch_coverage=1 00:04:41.355 --rc genhtml_function_coverage=1 00:04:41.355 --rc genhtml_legend=1 00:04:41.355 --rc geninfo_all_blocks=1 00:04:41.355 --rc geninfo_unexecuted_blocks=1 00:04:41.355 00:04:41.355 ' 00:04:41.355 20:31:58 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.355 --rc genhtml_branch_coverage=1 00:04:41.355 --rc genhtml_function_coverage=1 00:04:41.355 --rc genhtml_legend=1 00:04:41.355 --rc geninfo_all_blocks=1 00:04:41.355 --rc geninfo_unexecuted_blocks=1 00:04:41.355 00:04:41.355 ' 00:04:41.355 20:31:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:41.355 OK 00:04:41.616 20:31:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:41.616 00:04:41.616 real 0m0.184s 00:04:41.617 user 0m0.102s 00:04:41.617 sys 0m0.087s 00:04:41.617 20:31:58 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.617 20:31:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:41.617 ************************************ 00:04:41.617 END TEST rpc_client 00:04:41.617 ************************************ 00:04:41.617 20:31:58 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.617 20:31:58 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.617 20:31:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.617 20:31:58 -- common/autotest_common.sh@10 -- # set +x 00:04:41.617 ************************************ 00:04:41.617 START TEST json_config 00:04:41.617 ************************************ 00:04:41.617 20:31:58 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.617 20:31:58 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.617 20:31:58 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.617 20:31:58 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.617 20:31:58 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.617 20:31:58 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.617 20:31:58 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.617 20:31:58 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.617 20:31:58 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.617 20:31:58 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.617 20:31:58 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.617 20:31:58 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.617 20:31:58 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.617 20:31:58 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.617 20:31:58 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.617 20:31:58 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.617 20:31:58 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:41.617 20:31:58 json_config -- scripts/common.sh@345 -- # : 1 00:04:41.617 20:31:58 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.617 20:31:58 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.617 20:31:58 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:41.617 20:31:58 json_config -- scripts/common.sh@353 -- # local d=1 00:04:41.617 20:31:58 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.617 20:31:58 json_config -- scripts/common.sh@355 -- # echo 1 00:04:41.617 20:31:58 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.617 20:31:58 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:41.617 20:31:58 json_config -- scripts/common.sh@353 -- # local d=2 00:04:41.617 20:31:58 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.617 20:31:58 json_config -- scripts/common.sh@355 -- # echo 2 00:04:41.617 20:31:58 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.617 20:31:58 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.617 20:31:58 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.617 20:31:58 json_config -- scripts/common.sh@368 -- # return 0 00:04:41.617 20:31:58 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.617 20:31:58 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.617 --rc genhtml_branch_coverage=1 00:04:41.617 --rc genhtml_function_coverage=1 00:04:41.617 --rc genhtml_legend=1 00:04:41.617 --rc geninfo_all_blocks=1 00:04:41.617 --rc geninfo_unexecuted_blocks=1 00:04:41.617 00:04:41.617 ' 00:04:41.617 20:31:58 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.617 --rc genhtml_branch_coverage=1 00:04:41.617 --rc genhtml_function_coverage=1 00:04:41.617 --rc genhtml_legend=1 00:04:41.617 --rc geninfo_all_blocks=1 00:04:41.617 --rc geninfo_unexecuted_blocks=1 00:04:41.617 00:04:41.617 ' 00:04:41.617 20:31:58 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.617 --rc genhtml_branch_coverage=1 00:04:41.617 --rc genhtml_function_coverage=1 00:04:41.617 --rc genhtml_legend=1 00:04:41.617 --rc geninfo_all_blocks=1 00:04:41.617 --rc geninfo_unexecuted_blocks=1 00:04:41.617 00:04:41.617 ' 00:04:41.617 20:31:58 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.617 --rc genhtml_branch_coverage=1 00:04:41.617 --rc genhtml_function_coverage=1 00:04:41.617 --rc genhtml_legend=1 00:04:41.617 --rc geninfo_all_blocks=1 00:04:41.617 --rc geninfo_unexecuted_blocks=1 00:04:41.617 00:04:41.617 ' 00:04:41.617 20:31:58 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.617 20:31:58 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f72aea25-a2ce-4611-a8a8-77f4a743cbb5 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f72aea25-a2ce-4611-a8a8-77f4a743cbb5 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.617 20:31:58 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.617 20:31:58 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.617 20:31:58 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.617 20:31:58 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.617 20:31:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.617 20:31:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.617 20:31:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.617 20:31:58 json_config -- paths/export.sh@5 -- # export PATH 00:04:41.617 20:31:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@51 -- # : 0 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.617 20:31:58 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.617 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.617 20:31:58 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.617 20:31:58 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.617 20:31:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:41.617 20:31:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:41.617 20:31:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:41.617 20:31:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:41.617 20:31:58 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:41.617 WARNING: No tests are enabled so not running JSON configuration tests 00:04:41.617 20:31:58 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:41.617 00:04:41.617 real 0m0.142s 00:04:41.617 user 0m0.093s 00:04:41.617 sys 0m0.051s 00:04:41.617 20:31:58 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.617 20:31:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.617 ************************************ 00:04:41.617 END TEST json_config 00:04:41.617 ************************************ 00:04:41.617 20:31:58 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:41.617 20:31:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.617 20:31:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.617 20:31:58 -- common/autotest_common.sh@10 -- # set +x 00:04:41.617 ************************************ 00:04:41.617 START TEST json_config_extra_key 00:04:41.617 ************************************ 00:04:41.618 20:31:58 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:41.878 20:31:58 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.879 20:31:58 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.879 20:31:58 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.879 20:31:58 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.879 20:31:58 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:41.879 20:31:58 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.879 20:31:58 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.879 --rc genhtml_branch_coverage=1 00:04:41.879 --rc genhtml_function_coverage=1 00:04:41.879 --rc genhtml_legend=1 00:04:41.879 --rc geninfo_all_blocks=1 00:04:41.879 --rc geninfo_unexecuted_blocks=1 00:04:41.879 00:04:41.879 ' 00:04:41.879 20:31:58 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.879 --rc genhtml_branch_coverage=1 00:04:41.879 --rc genhtml_function_coverage=1 00:04:41.879 --rc genhtml_legend=1 00:04:41.879 --rc geninfo_all_blocks=1 00:04:41.879 --rc geninfo_unexecuted_blocks=1 00:04:41.879 00:04:41.879 ' 00:04:41.879 20:31:58 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.879 --rc genhtml_branch_coverage=1 00:04:41.879 --rc genhtml_function_coverage=1 00:04:41.879 --rc genhtml_legend=1 00:04:41.879 --rc geninfo_all_blocks=1 00:04:41.879 --rc geninfo_unexecuted_blocks=1 00:04:41.879 00:04:41.879 ' 00:04:41.879 20:31:58 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.879 --rc genhtml_branch_coverage=1 00:04:41.879 --rc 
genhtml_function_coverage=1 00:04:41.879 --rc genhtml_legend=1 00:04:41.879 --rc geninfo_all_blocks=1 00:04:41.879 --rc geninfo_unexecuted_blocks=1 00:04:41.879 00:04:41.879 ' 00:04:41.879 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f72aea25-a2ce-4611-a8a8-77f4a743cbb5 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f72aea25-a2ce-4611-a8a8-77f4a743cbb5 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.879 20:31:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.879 20:31:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.879 20:31:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.879 20:31:58 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.879 20:31:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:41.879 20:31:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.879 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.879 20:31:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.879 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.880 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:41.880 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:41.880 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:41.880 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:41.880 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:41.880 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:41.880 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:41.880 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:41.880 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.880 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:41.880 INFO: launching applications... 
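The `declare -A` traces just above show how the harness bookkeeps each application it manages: four associative arrays, all keyed by app name ("target" here), hold the pid, the RPC socket, the launch parameters, and the config path. A minimal sketch of that pattern, with keys and values copied from the trace (the standalone script around them is an illustrative reconstruction, not the SPDK source):

```bash
#!/usr/bin/env bash
# Per-app bookkeeping, mirroring the json_config/common.sh xtrace above.

declare -A app_pid=([target]='')                           # filled in once the app is launched
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')  # RPC socket for this app
declare -A app_params=([target]='-m 0x1 -s 1024')          # core mask and memory size
declare -A configs_path=([target]="$PWD/extra_key.json")   # JSON config handed to --json

app=target
echo "launching '$app' (${app_params[$app]}) on ${app_socket[$app]}"
```

Keying everything by app name lets the same start, wait, and stop helpers serve any number of targets uniformly.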
00:04:41.880 20:31:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:41.880 20:31:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:41.880 20:31:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:41.880 20:31:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.880 20:31:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.880 20:31:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.880 20:31:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.880 20:31:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.880 20:31:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57784 00:04:41.880 20:31:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.880 Waiting for target to run... 00:04:41.880 20:31:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57784 /var/tmp/spdk_tgt.sock 00:04:41.880 20:31:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:41.880 20:31:58 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57784 ']' 00:04:41.880 20:31:58 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.880 20:31:58 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.880 20:31:58 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.880 20:31:58 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.880 20:31:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.880 [2024-12-06 20:31:58.927678] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:04:41.880 [2024-12-06 20:31:58.927954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57784 ] 00:04:42.140 [2024-12-06 20:31:59.255273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.401 [2024-12-06 20:31:59.347899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.974 20:31:59 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.974 20:31:59 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:42.974 20:31:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:42.974 00:04:42.974 INFO: shutting down applications... 00:04:42.974 20:31:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
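The launch above backgrounds spdk_tgt with `-r /var/tmp/spdk_tgt.sock --json extra_key.json`, records pid 57784, and blocks until the RPC socket answers before the test body runs. A hedged sketch of that start-and-wait flow follows; the 100 x 0.1 s retry budget and the repo-relative paths are assumptions, while the rpc.py flags (-s socket, -t timeout) and rpc_get_methods appear in the log itself:

```bash
sock=/var/tmp/spdk_tgt.sock
build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" \
    --json test/json_config/extra_key.json &
pid=$!
echo "Waiting for target to run... (pid $pid)"

up=0
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods fails until the target is up and listening on $sock
    if scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
        up=1
        break
    fi
    sleep 0.1
done
(( up )) || { echo "target never listened on $sock" >&2; exit 1; }
```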
00:04:42.974 20:31:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:42.974 20:31:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:42.974 20:31:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.974 20:31:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57784 ]] 00:04:42.974 20:31:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57784 00:04:42.974 20:31:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.974 20:31:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.974 20:31:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57784 00:04:42.974 20:31:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.237 20:32:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.237 20:32:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.237 20:32:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57784 00:04:43.237 20:32:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.810 20:32:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.810 20:32:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.810 20:32:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57784 00:04:43.810 20:32:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.383 20:32:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.383 20:32:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.383 20:32:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57784 00:04:44.383 20:32:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.027 20:32:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.027 20:32:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.027 20:32:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57784 00:04:45.027 20:32:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:45.027 20:32:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:45.027 20:32:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:45.027 20:32:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:45.027 SPDK target shutdown done 00:04:45.027 Success 00:04:45.027 20:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:45.027 ************************************ 00:04:45.027 END TEST json_config_extra_key 00:04:45.027 ************************************ 00:04:45.027 00:04:45.027 real 0m3.154s 00:04:45.027 user 0m2.664s 00:04:45.027 sys 0m0.386s 00:04:45.027 20:32:01 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.027 20:32:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.027 20:32:01 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.027 20:32:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.027 20:32:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.027 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:04:45.027 
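The teardown traced above is a bounded poll: SIGINT goes to pid 57784, then `kill -0` (signal 0, an existence check that delivers nothing) is retried every 0.5 s for at most 30 iterations; in this run the target exited after four polls. A minimal reconstruction, with the pid hard-coded for illustration:

```bash
pid=57784
kill -SIGINT "$pid"

# Matches the (( i < 30 )) / sleep 0.5 loop in the log; kill -0 only
# reports whether the process still exists.
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        exit 0
    fi
    sleep 0.5
done
echo "target still alive after 15 s" >&2
exit 1
```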
************************************ 00:04:45.027 START TEST alias_rpc 00:04:45.027 ************************************ 00:04:45.027 20:32:01 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.027 * Looking for test storage... 00:04:45.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:45.027 20:32:01 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:45.027 20:32:01 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:45.027 20:32:01 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:45.027 20:32:02 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.027 20:32:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.028 20:32:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:45.028 20:32:02 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.028 20:32:02 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:45.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.028 --rc genhtml_branch_coverage=1 00:04:45.028 --rc genhtml_function_coverage=1 00:04:45.028 --rc genhtml_legend=1 00:04:45.028 --rc geninfo_all_blocks=1 00:04:45.028 --rc geninfo_unexecuted_blocks=1 00:04:45.028 00:04:45.028 ' 00:04:45.028 20:32:02 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:45.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.028 --rc genhtml_branch_coverage=1 00:04:45.028 --rc genhtml_function_coverage=1 00:04:45.028 --rc genhtml_legend=1 00:04:45.028 --rc geninfo_all_blocks=1 00:04:45.028 --rc geninfo_unexecuted_blocks=1 00:04:45.028 00:04:45.028 ' 00:04:45.028 20:32:02 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:45.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.028 --rc genhtml_branch_coverage=1 00:04:45.028 --rc genhtml_function_coverage=1 00:04:45.028 --rc genhtml_legend=1 00:04:45.028 --rc geninfo_all_blocks=1 00:04:45.028 --rc geninfo_unexecuted_blocks=1 00:04:45.028 00:04:45.028 ' 00:04:45.028 20:32:02 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:45.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.028 --rc genhtml_branch_coverage=1 00:04:45.028 --rc genhtml_function_coverage=1 00:04:45.028 --rc genhtml_legend=1 00:04:45.028 --rc geninfo_all_blocks=1 00:04:45.028 --rc geninfo_unexecuted_blocks=1 00:04:45.028 00:04:45.028 ' 00:04:45.028 20:32:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:45.028 20:32:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57877 00:04:45.028 20:32:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57877 00:04:45.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
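Every test in this section opens with the same guard, traced again above for alias_rpc: `lt 1.15 2` splits both version strings on `.`, `-`, or `:`, compares them numerically field by field, and the extra lcov coverage flags are enabled only when the installed lcov predates 2.x. A compact reconstruction of that comparison (the helper name and the IFS splitting come from the trace; the exact body is an assumption):

```bash
# Returns success when $1 is a strictly lower version than $2.
lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # Missing fields compare as 0, so 1.15 vs 2 behaves like 1.15.0 vs 2.0.0.
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "pre-2.x lcov: enable branch/function coverage options"
```

Here `lt 1.15 2` succeeds on the first field (1 < 2), which is why each trace goes on to set lcov_rc_opt and export the LCOV_OPTS block.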
00:04:45.028 20:32:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57877 ']' 00:04:45.028 20:32:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.028 20:32:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.028 20:32:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.028 20:32:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.028 20:32:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.028 20:32:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.028 [2024-12-06 20:32:02.131095] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:04:45.028 [2024-12-06 20:32:02.131215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57877 ] 00:04:45.290 [2024-12-06 20:32:02.288739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.290 [2024-12-06 20:32:02.388584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.863 20:32:02 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.863 20:32:02 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:45.863 20:32:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:46.125 20:32:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57877 00:04:46.125 20:32:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57877 ']' 00:04:46.125 20:32:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57877 00:04:46.125 20:32:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:46.125 20:32:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.125 20:32:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57877 00:04:46.125 killing process with pid 57877 00:04:46.125 20:32:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.125 20:32:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.125 20:32:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57877' 00:04:46.125 20:32:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 57877 00:04:46.125 20:32:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 57877 00:04:48.034 ************************************ 00:04:48.034 END TEST alias_rpc 00:04:48.034 ************************************ 00:04:48.034 00:04:48.034 real 0m2.847s 00:04:48.034 user 0m2.946s 00:04:48.034 sys 0m0.399s 00:04:48.034 20:32:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.034 20:32:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.034 20:32:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:48.034 20:32:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:48.034 20:32:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.034 20:32:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.034 20:32:04 -- common/autotest_common.sh@10 -- # set +x 00:04:48.034 ************************************ 00:04:48.034 START TEST spdkcli_tcp 
00:04:48.034 ************************************ 00:04:48.034 20:32:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:48.034 * Looking for test storage... 00:04:48.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:48.034 20:32:04 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.034 20:32:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.034 20:32:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.034 20:32:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.034 20:32:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:48.034 20:32:04 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.034 20:32:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.034 --rc genhtml_branch_coverage=1 00:04:48.034 --rc genhtml_function_coverage=1 00:04:48.034 --rc genhtml_legend=1 00:04:48.034 --rc geninfo_all_blocks=1 00:04:48.034 --rc geninfo_unexecuted_blocks=1 00:04:48.034 00:04:48.034 ' 00:04:48.034 20:32:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.034 --rc genhtml_branch_coverage=1 00:04:48.034 --rc genhtml_function_coverage=1 00:04:48.034 --rc genhtml_legend=1 00:04:48.034 --rc geninfo_all_blocks=1 00:04:48.034 --rc geninfo_unexecuted_blocks=1 00:04:48.034 00:04:48.034 ' 00:04:48.034 20:32:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:48.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.034 --rc genhtml_branch_coverage=1 00:04:48.034 --rc genhtml_function_coverage=1 00:04:48.034 --rc genhtml_legend=1 00:04:48.034 --rc geninfo_all_blocks=1 00:04:48.034 --rc geninfo_unexecuted_blocks=1 00:04:48.034 00:04:48.034 ' 00:04:48.034 20:32:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.034 --rc genhtml_branch_coverage=1 00:04:48.034 --rc genhtml_function_coverage=1 00:04:48.034 --rc genhtml_legend=1 00:04:48.034 --rc geninfo_all_blocks=1 00:04:48.034 --rc geninfo_unexecuted_blocks=1 00:04:48.034 00:04:48.034 ' 00:04:48.034 20:32:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:48.034 20:32:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:48.035 20:32:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:48.035 20:32:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:48.035 20:32:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:48.035 20:32:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:48.035 20:32:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:48.035 20:32:04 spdkcli_tcp -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.035 20:32:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.035 20:32:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57973 00:04:48.035 20:32:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57973 00:04:48.035 20:32:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57973 ']' 00:04:48.035 20:32:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.035 20:32:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.035 20:32:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.035 20:32:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.035 20:32:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.035 20:32:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:48.035 [2024-12-06 20:32:05.035476] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:04:48.035 [2024-12-06 20:32:05.035744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57973 ] 00:04:48.298 [2024-12-06 20:32:05.197377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.298 [2024-12-06 20:32:05.297360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.298 [2024-12-06 20:32:05.297523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.872 20:32:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.872 20:32:05 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:48.872 20:32:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57990 00:04:48.872 20:32:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:48.872 20:32:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:49.133 [ 00:04:49.133 "bdev_malloc_delete", 00:04:49.133 "bdev_malloc_create", 00:04:49.133 "bdev_null_resize", 00:04:49.133 "bdev_null_delete", 00:04:49.133 "bdev_null_create", 00:04:49.133 "bdev_nvme_cuse_unregister", 00:04:49.133 "bdev_nvme_cuse_register", 00:04:49.133 "bdev_opal_new_user", 00:04:49.133 "bdev_opal_set_lock_state", 00:04:49.133 "bdev_opal_delete", 00:04:49.133 "bdev_opal_get_info", 00:04:49.133 "bdev_opal_create", 00:04:49.133 "bdev_nvme_opal_revert", 00:04:49.133 "bdev_nvme_opal_init", 00:04:49.133 "bdev_nvme_send_cmd", 00:04:49.133 "bdev_nvme_set_keys", 00:04:49.133 "bdev_nvme_get_path_iostat", 00:04:49.133 "bdev_nvme_get_mdns_discovery_info", 00:04:49.133 "bdev_nvme_stop_mdns_discovery", 00:04:49.133 "bdev_nvme_start_mdns_discovery", 00:04:49.133 "bdev_nvme_set_multipath_policy", 00:04:49.133 "bdev_nvme_set_preferred_path", 00:04:49.133 "bdev_nvme_get_io_paths", 00:04:49.133 "bdev_nvme_remove_error_injection", 00:04:49.133 "bdev_nvme_add_error_injection", 00:04:49.133 "bdev_nvme_get_discovery_info", 00:04:49.133 "bdev_nvme_stop_discovery", 00:04:49.133 "bdev_nvme_start_discovery", 00:04:49.133 
"bdev_nvme_get_controller_health_info", 00:04:49.133 "bdev_nvme_disable_controller", 00:04:49.133 "bdev_nvme_enable_controller", 00:04:49.133 "bdev_nvme_reset_controller", 00:04:49.133 "bdev_nvme_get_transport_statistics", 00:04:49.133 "bdev_nvme_apply_firmware", 00:04:49.133 "bdev_nvme_detach_controller", 00:04:49.133 "bdev_nvme_get_controllers", 00:04:49.133 "bdev_nvme_attach_controller", 00:04:49.133 "bdev_nvme_set_hotplug", 00:04:49.133 "bdev_nvme_set_options", 00:04:49.133 "bdev_passthru_delete", 00:04:49.133 "bdev_passthru_create", 00:04:49.133 "bdev_lvol_set_parent_bdev", 00:04:49.133 "bdev_lvol_set_parent", 00:04:49.133 "bdev_lvol_check_shallow_copy", 00:04:49.133 "bdev_lvol_start_shallow_copy", 00:04:49.133 "bdev_lvol_grow_lvstore", 00:04:49.133 "bdev_lvol_get_lvols", 00:04:49.133 "bdev_lvol_get_lvstores", 00:04:49.133 "bdev_lvol_delete", 00:04:49.133 "bdev_lvol_set_read_only", 00:04:49.133 "bdev_lvol_resize", 00:04:49.133 "bdev_lvol_decouple_parent", 00:04:49.133 "bdev_lvol_inflate", 00:04:49.133 "bdev_lvol_rename", 00:04:49.133 "bdev_lvol_clone_bdev", 00:04:49.133 "bdev_lvol_clone", 00:04:49.133 "bdev_lvol_snapshot", 00:04:49.133 "bdev_lvol_create", 00:04:49.133 "bdev_lvol_delete_lvstore", 00:04:49.133 "bdev_lvol_rename_lvstore", 00:04:49.133 "bdev_lvol_create_lvstore", 00:04:49.133 "bdev_raid_set_options", 00:04:49.133 "bdev_raid_remove_base_bdev", 00:04:49.133 "bdev_raid_add_base_bdev", 00:04:49.133 "bdev_raid_delete", 00:04:49.133 "bdev_raid_create", 00:04:49.133 "bdev_raid_get_bdevs", 00:04:49.133 "bdev_error_inject_error", 00:04:49.133 "bdev_error_delete", 00:04:49.133 "bdev_error_create", 00:04:49.133 "bdev_split_delete", 00:04:49.133 "bdev_split_create", 00:04:49.133 "bdev_delay_delete", 00:04:49.133 "bdev_delay_create", 00:04:49.133 "bdev_delay_update_latency", 00:04:49.133 "bdev_zone_block_delete", 00:04:49.133 "bdev_zone_block_create", 00:04:49.133 "blobfs_create", 00:04:49.133 "blobfs_detect", 00:04:49.133 "blobfs_set_cache_size", 00:04:49.133 "bdev_xnvme_delete", 00:04:49.133 "bdev_xnvme_create", 00:04:49.133 "bdev_aio_delete", 00:04:49.133 "bdev_aio_rescan", 00:04:49.133 "bdev_aio_create", 00:04:49.133 "bdev_ftl_set_property", 00:04:49.133 "bdev_ftl_get_properties", 00:04:49.133 "bdev_ftl_get_stats", 00:04:49.133 "bdev_ftl_unmap", 00:04:49.133 "bdev_ftl_unload", 00:04:49.133 "bdev_ftl_delete", 00:04:49.133 "bdev_ftl_load", 00:04:49.133 "bdev_ftl_create", 00:04:49.133 "bdev_virtio_attach_controller", 00:04:49.133 "bdev_virtio_scsi_get_devices", 00:04:49.133 "bdev_virtio_detach_controller", 00:04:49.133 "bdev_virtio_blk_set_hotplug", 00:04:49.133 "bdev_iscsi_delete", 00:04:49.133 "bdev_iscsi_create", 00:04:49.133 "bdev_iscsi_set_options", 00:04:49.133 "accel_error_inject_error", 00:04:49.133 "ioat_scan_accel_module", 00:04:49.133 "dsa_scan_accel_module", 00:04:49.133 "iaa_scan_accel_module", 00:04:49.133 "keyring_file_remove_key", 00:04:49.133 "keyring_file_add_key", 00:04:49.133 "keyring_linux_set_options", 00:04:49.133 "fsdev_aio_delete", 00:04:49.133 "fsdev_aio_create", 00:04:49.133 "iscsi_get_histogram", 00:04:49.133 "iscsi_enable_histogram", 00:04:49.133 "iscsi_set_options", 00:04:49.133 "iscsi_get_auth_groups", 00:04:49.133 "iscsi_auth_group_remove_secret", 00:04:49.133 "iscsi_auth_group_add_secret", 00:04:49.133 "iscsi_delete_auth_group", 00:04:49.133 "iscsi_create_auth_group", 00:04:49.133 "iscsi_set_discovery_auth", 00:04:49.133 "iscsi_get_options", 00:04:49.133 "iscsi_target_node_request_logout", 00:04:49.133 "iscsi_target_node_set_redirect", 00:04:49.133 
"iscsi_target_node_set_auth", 00:04:49.133 "iscsi_target_node_add_lun", 00:04:49.133 "iscsi_get_stats", 00:04:49.133 "iscsi_get_connections", 00:04:49.133 "iscsi_portal_group_set_auth", 00:04:49.133 "iscsi_start_portal_group", 00:04:49.133 "iscsi_delete_portal_group", 00:04:49.133 "iscsi_create_portal_group", 00:04:49.133 "iscsi_get_portal_groups", 00:04:49.133 "iscsi_delete_target_node", 00:04:49.133 "iscsi_target_node_remove_pg_ig_maps", 00:04:49.133 "iscsi_target_node_add_pg_ig_maps", 00:04:49.133 "iscsi_create_target_node", 00:04:49.133 "iscsi_get_target_nodes", 00:04:49.133 "iscsi_delete_initiator_group", 00:04:49.133 "iscsi_initiator_group_remove_initiators", 00:04:49.133 "iscsi_initiator_group_add_initiators", 00:04:49.133 "iscsi_create_initiator_group", 00:04:49.133 "iscsi_get_initiator_groups", 00:04:49.133 "nvmf_set_crdt", 00:04:49.133 "nvmf_set_config", 00:04:49.133 "nvmf_set_max_subsystems", 00:04:49.133 "nvmf_stop_mdns_prr", 00:04:49.133 "nvmf_publish_mdns_prr", 00:04:49.133 "nvmf_subsystem_get_listeners", 00:04:49.133 "nvmf_subsystem_get_qpairs", 00:04:49.133 "nvmf_subsystem_get_controllers", 00:04:49.133 "nvmf_get_stats", 00:04:49.133 "nvmf_get_transports", 00:04:49.133 "nvmf_create_transport", 00:04:49.133 "nvmf_get_targets", 00:04:49.133 "nvmf_delete_target", 00:04:49.133 "nvmf_create_target", 00:04:49.133 "nvmf_subsystem_allow_any_host", 00:04:49.133 "nvmf_subsystem_set_keys", 00:04:49.133 "nvmf_subsystem_remove_host", 00:04:49.133 "nvmf_subsystem_add_host", 00:04:49.133 "nvmf_ns_remove_host", 00:04:49.133 "nvmf_ns_add_host", 00:04:49.133 "nvmf_subsystem_remove_ns", 00:04:49.133 "nvmf_subsystem_set_ns_ana_group", 00:04:49.133 "nvmf_subsystem_add_ns", 00:04:49.133 "nvmf_subsystem_listener_set_ana_state", 00:04:49.133 "nvmf_discovery_get_referrals", 00:04:49.133 "nvmf_discovery_remove_referral", 00:04:49.133 "nvmf_discovery_add_referral", 00:04:49.133 "nvmf_subsystem_remove_listener", 00:04:49.133 "nvmf_subsystem_add_listener", 00:04:49.133 "nvmf_delete_subsystem", 00:04:49.133 "nvmf_create_subsystem", 00:04:49.133 "nvmf_get_subsystems", 00:04:49.133 "env_dpdk_get_mem_stats", 00:04:49.133 "nbd_get_disks", 00:04:49.133 "nbd_stop_disk", 00:04:49.133 "nbd_start_disk", 00:04:49.133 "ublk_recover_disk", 00:04:49.133 "ublk_get_disks", 00:04:49.133 "ublk_stop_disk", 00:04:49.133 "ublk_start_disk", 00:04:49.133 "ublk_destroy_target", 00:04:49.133 "ublk_create_target", 00:04:49.133 "virtio_blk_create_transport", 00:04:49.133 "virtio_blk_get_transports", 00:04:49.133 "vhost_controller_set_coalescing", 00:04:49.133 "vhost_get_controllers", 00:04:49.133 "vhost_delete_controller", 00:04:49.133 "vhost_create_blk_controller", 00:04:49.133 "vhost_scsi_controller_remove_target", 00:04:49.133 "vhost_scsi_controller_add_target", 00:04:49.133 "vhost_start_scsi_controller", 00:04:49.133 "vhost_create_scsi_controller", 00:04:49.133 "thread_set_cpumask", 00:04:49.133 "scheduler_set_options", 00:04:49.133 "framework_get_governor", 00:04:49.133 "framework_get_scheduler", 00:04:49.133 "framework_set_scheduler", 00:04:49.133 "framework_get_reactors", 00:04:49.133 "thread_get_io_channels", 00:04:49.134 "thread_get_pollers", 00:04:49.134 "thread_get_stats", 00:04:49.134 "framework_monitor_context_switch", 00:04:49.134 "spdk_kill_instance", 00:04:49.134 "log_enable_timestamps", 00:04:49.134 "log_get_flags", 00:04:49.134 "log_clear_flag", 00:04:49.134 "log_set_flag", 00:04:49.134 "log_get_level", 00:04:49.134 "log_set_level", 00:04:49.134 "log_get_print_level", 00:04:49.134 "log_set_print_level", 
00:04:49.134 "framework_enable_cpumask_locks", 00:04:49.134 "framework_disable_cpumask_locks", 00:04:49.134 "framework_wait_init", 00:04:49.134 "framework_start_init", 00:04:49.134 "scsi_get_devices", 00:04:49.134 "bdev_get_histogram", 00:04:49.134 "bdev_enable_histogram", 00:04:49.134 "bdev_set_qos_limit", 00:04:49.134 "bdev_set_qd_sampling_period", 00:04:49.134 "bdev_get_bdevs", 00:04:49.134 "bdev_reset_iostat", 00:04:49.134 "bdev_get_iostat", 00:04:49.134 "bdev_examine", 00:04:49.134 "bdev_wait_for_examine", 00:04:49.134 "bdev_set_options", 00:04:49.134 "accel_get_stats", 00:04:49.134 "accel_set_options", 00:04:49.134 "accel_set_driver", 00:04:49.134 "accel_crypto_key_destroy", 00:04:49.134 "accel_crypto_keys_get", 00:04:49.134 "accel_crypto_key_create", 00:04:49.134 "accel_assign_opc", 00:04:49.134 "accel_get_module_info", 00:04:49.134 "accel_get_opc_assignments", 00:04:49.134 "vmd_rescan", 00:04:49.134 "vmd_remove_device", 00:04:49.134 "vmd_enable", 00:04:49.134 "sock_get_default_impl", 00:04:49.134 "sock_set_default_impl", 00:04:49.134 "sock_impl_set_options", 00:04:49.134 "sock_impl_get_options", 00:04:49.134 "iobuf_get_stats", 00:04:49.134 "iobuf_set_options", 00:04:49.134 "keyring_get_keys", 00:04:49.134 "framework_get_pci_devices", 00:04:49.134 "framework_get_config", 00:04:49.134 "framework_get_subsystems", 00:04:49.134 "fsdev_set_opts", 00:04:49.134 "fsdev_get_opts", 00:04:49.134 "trace_get_info", 00:04:49.134 "trace_get_tpoint_group_mask", 00:04:49.134 "trace_disable_tpoint_group", 00:04:49.134 "trace_enable_tpoint_group", 00:04:49.134 "trace_clear_tpoint_mask", 00:04:49.134 "trace_set_tpoint_mask", 00:04:49.134 "notify_get_notifications", 00:04:49.134 "notify_get_types", 00:04:49.134 "spdk_get_version", 00:04:49.134 "rpc_get_methods" 00:04:49.134 ] 00:04:49.134 20:32:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:49.134 20:32:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:49.134 20:32:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.134 20:32:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:49.134 20:32:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57973 00:04:49.134 20:32:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57973 ']' 00:04:49.134 20:32:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57973 00:04:49.134 20:32:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:49.134 20:32:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.134 20:32:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57973 00:04:49.134 killing process with pid 57973 00:04:49.134 20:32:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.134 20:32:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.134 20:32:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57973' 00:04:49.134 20:32:06 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57973 00:04:49.134 20:32:06 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57973 00:04:51.051 ************************************ 00:04:51.051 END TEST spdkcli_tcp 00:04:51.051 ************************************ 00:04:51.051 00:04:51.051 real 0m2.839s 00:04:51.051 user 0m5.051s 00:04:51.051 sys 0m0.462s 00:04:51.051 20:32:07 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.051 20:32:07 spdkcli_tcp -- common/autotest_common.sh@10 
-- # set +x 00:04:51.051 20:32:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.051 20:32:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.051 20:32:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.051 20:32:07 -- common/autotest_common.sh@10 -- # set +x 00:04:51.051 ************************************ 00:04:51.051 START TEST dpdk_mem_utility 00:04:51.051 ************************************ 00:04:51.051 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.051 * Looking for test storage... 00:04:51.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:51.051 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.051 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.051 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.051 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.051 20:32:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:51.051 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.051 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.051 --rc genhtml_branch_coverage=1 00:04:51.051 --rc genhtml_function_coverage=1 00:04:51.051 --rc genhtml_legend=1 00:04:51.051 --rc geninfo_all_blocks=1 00:04:51.051 --rc geninfo_unexecuted_blocks=1 00:04:51.051 00:04:51.051 ' 00:04:51.051 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.051 --rc genhtml_branch_coverage=1 00:04:51.051 --rc genhtml_function_coverage=1 00:04:51.051 --rc genhtml_legend=1 00:04:51.051 --rc geninfo_all_blocks=1 00:04:51.051 --rc geninfo_unexecuted_blocks=1 00:04:51.051 00:04:51.051 ' 00:04:51.051 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.051 --rc genhtml_branch_coverage=1 00:04:51.051 --rc genhtml_function_coverage=1 00:04:51.051 --rc genhtml_legend=1 00:04:51.052 --rc geninfo_all_blocks=1 00:04:51.052 --rc geninfo_unexecuted_blocks=1 00:04:51.052 00:04:51.052 ' 00:04:51.052 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.052 --rc genhtml_branch_coverage=1 00:04:51.052 --rc genhtml_function_coverage=1 00:04:51.052 --rc genhtml_legend=1 00:04:51.052 --rc geninfo_all_blocks=1 00:04:51.052 --rc geninfo_unexecuted_blocks=1 00:04:51.052 00:04:51.052 ' 00:04:51.052 20:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:51.052 20:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58078 00:04:51.052 20:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58078 00:04:51.052 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58078 ']' 00:04:51.052 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.052 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.052 20:32:07 
00:04:51.052 20:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:04:51.052 20:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58078
00:04:51.052 20:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58078
00:04:51.052 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58078 ']'
00:04:51.052 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:51.052 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:51.052 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:51.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:51.052 20:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:51.052 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:51.052 20:32:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:51.052 [2024-12-06 20:32:07.906150] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization...
00:04:51.052 [2024-12-06 20:32:07.906273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58078 ]
00:04:51.052 [2024-12-06 20:32:08.063134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:51.052 [2024-12-06 20:32:08.162926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:51.625 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:51.625 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:04:51.625 20:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:51.625 20:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:51.625 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:51.625 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:51.888 {
00:04:51.888 "filename": "/tmp/spdk_mem_dump.txt"
00:04:51.888 }
00:04:51.888 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:51.888 20:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:04:51.888 DPDK memory size 824.000000 MiB in 1 heap(s)
00:04:51.888 1 heaps totaling size 824.000000 MiB
00:04:51.888 size: 824.000000 MiB heap id: 0
00:04:51.888 end heaps----------
00:04:51.888 9 mempools totaling size 603.782043 MiB
00:04:51.888 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:51.888 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:51.888 size: 100.555481 MiB name: bdev_io_58078
00:04:51.888 size: 50.003479 MiB name: msgpool_58078
00:04:51.888 size: 36.509338 MiB name: fsdev_io_58078
00:04:51.888 size: 21.763794 MiB name: PDU_Pool
00:04:51.888 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:51.888 size: 4.133484 MiB name: evtpool_58078
00:04:51.888 size: 0.026123 MiB name: Session_Pool
00:04:51.888 end mempools-------
00:04:51.888 6 memzones totaling size 4.142822 MiB
00:04:51.888 size: 1.000366 MiB name: RG_ring_0_58078
00:04:51.888 size: 1.000366 MiB name: RG_ring_1_58078
00:04:51.888 size: 1.000366 MiB name: RG_ring_4_58078
00:04:51.888 size: 1.000366 MiB name: RG_ring_5_58078
00:04:51.888 size: 0.125366 MiB name: RG_ring_2_58078
00:04:51.888 size: 0.015991 MiB name: RG_ring_3_58078
00:04:51.888 end memzones-------
00:04:51.888 20:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:04:51.888 heap id: 0 total size: 824.000000 MiB number of busy elements: 321 number of free
elements: 18 00:04:51.888 list of free elements. size: 16.779907 MiB 00:04:51.889 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:51.889 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:51.889 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:51.889 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:51.889 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:51.889 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:51.889 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:51.889 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:51.889 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:51.889 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:51.889 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:51.889 element at address: 0x20001b400000 with size: 0.560486 MiB 00:04:51.889 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:51.889 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:51.889 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:51.889 element at address: 0x200012c00000 with size: 0.433228 MiB 00:04:51.889 element at address: 0x200028800000 with size: 0.391418 MiB 00:04:51.889 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:51.889 list of standard malloc elements. size: 199.289185 MiB 00:04:51.889 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:51.889 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:51.889 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:51.889 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:51.889 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:51.889 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:51.889 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:51.889 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:51.889 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:51.889 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:51.889 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:51.889 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:51.889 element at 
address: 0x2000004fef40 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:51.889 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7e0c0 
with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:51.889 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:51.889 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012c6f680 with size: 0.000244 MiB 
00:04:51.890 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:51.890 element at 
address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4944c0 
with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:51.890 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:51.891 element at address: 0x200028864340 with size: 0.000244 MiB 00:04:51.891 element at address: 0x200028864440 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886b100 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886d180 with size: 0.000244 MiB 
00:04:51.891 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:51.891 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:51.891 list of memzone associated elements. 
size: 607.930908 MiB 00:04:51.891 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:51.891 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:51.891 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:51.891 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:51.891 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:51.891 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58078_0 00:04:51.891 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:51.891 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58078_0 00:04:51.891 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:51.891 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58078_0 00:04:51.891 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:51.891 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:51.891 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:51.891 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:51.891 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:51.891 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58078_0 00:04:51.891 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:51.891 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58078 00:04:51.891 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:51.891 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58078 00:04:51.891 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:51.891 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:51.891 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:51.891 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:51.891 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:51.891 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:51.891 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:51.891 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:51.891 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:51.891 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58078 00:04:51.891 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:51.891 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58078 00:04:51.891 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:51.891 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58078 00:04:51.891 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:51.891 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58078 00:04:51.891 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:51.891 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58078 00:04:51.892 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:51.892 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58078 00:04:51.892 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:51.892 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:51.892 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:51.892 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:51.892 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:51.892 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool
00:04:51.892 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:04:51.892 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58078
00:04:51.892 element at address: 0x20000085df80 with size: 0.125549 MiB
00:04:51.892 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58078
00:04:51.892 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:04:51.892 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:51.892 element at address: 0x200028864540 with size: 0.023804 MiB
00:04:51.892 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:51.892 element at address: 0x200000859d40 with size: 0.016174 MiB
00:04:51.892 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58078
00:04:51.892 element at address: 0x20002886a6c0 with size: 0.002502 MiB
00:04:51.892 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:51.892 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:04:51.892 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58078
00:04:51.892 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:04:51.892 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58078
00:04:51.892 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:04:51.892 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58078
00:04:51.892 element at address: 0x20002886b200 with size: 0.000366 MiB
00:04:51.892 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:51.892 20:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:51.892 20:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58078
00:04:51.892 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58078 ']'
00:04:51.892 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58078
00:04:51.892 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:51.892 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:51.892 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58078
00:04:51.892 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:51.892 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:51.892 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58078'
00:04:51.892 killing process with pid 58078
00:04:51.892 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58078
00:04:51.892 20:32:08 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58078
00:04:53.278 
00:04:53.278 real 0m2.652s
00:04:53.278 user 0m2.675s
00:04:53.278 sys 0m0.380s
00:04:53.278 20:32:10 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:53.279 20:32:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:53.279 ************************************
00:04:53.279 END TEST dpdk_mem_utility
00:04:53.279 ************************************
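
Condensed, the dpdk_mem_utility test that just finished starts spdk_tgt, asks it over the RPC socket to dump its DPDK memory state, and feeds the dump to scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once per-heap with -m 0. A rough bash replay of the traced steps (a hypothetical condensation, not the real test_dpdk_mem_info.sh, which also manages xtrace and retries):

    # Hypothetical condensed replay of the steps traced above.
    rootdir=/home/vagrant/spdk_repo/spdk
    "$rootdir/build/bin/spdk_tgt" &                # comes up listening on /var/tmp/spdk.sock
    spdkpid=$!
    trap 'kill "$spdkpid"' SIGINT SIGTERM EXIT
    # once the RPC socket is up, dump memory stats to /tmp/spdk_mem_dump.txt
    "$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats
    "$rootdir/scripts/dpdk_mem_info.py"            # summary seen above: 1 heap, 9 mempools, 6 memzones
    "$rootdir/scripts/dpdk_mem_info.py" -m 0       # per-element detail for heap id 0
    kill "$spdkpid"
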
00:04:53.279 20:32:10 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:53.279 20:32:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:53.279 20:32:10 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:53.279 20:32:10 -- common/autotest_common.sh@10 -- # set +x
00:04:53.279 ************************************
00:04:53.279 START TEST event
00:04:53.279 ************************************
00:04:53.279 20:32:10 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:53.540 * Looking for test storage...
00:04:53.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:53.540 20:32:10 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:53.540 20:32:10 event -- common/autotest_common.sh@1711 -- # lcov --version
00:04:53.540 20:32:10 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:53.540 20:32:10 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:53.540 20:32:10 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:53.540 20:32:10 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:53.540 20:32:10 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:53.540 20:32:10 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:53.540 20:32:10 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:53.540 20:32:10 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:53.540 20:32:10 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:53.540 20:32:10 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:53.540 20:32:10 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:53.540 20:32:10 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:53.540 20:32:10 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:53.540 20:32:10 event -- scripts/common.sh@344 -- # case "$op" in
00:04:53.540 20:32:10 event -- scripts/common.sh@345 -- # : 1
00:04:53.540 20:32:10 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:53.540 20:32:10 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:53.540 20:32:10 event -- scripts/common.sh@365 -- # decimal 1
00:04:53.540 20:32:10 event -- scripts/common.sh@353 -- # local d=1
00:04:53.540 20:32:10 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:53.540 20:32:10 event -- scripts/common.sh@355 -- # echo 1
00:04:53.540 20:32:10 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:53.540 20:32:10 event -- scripts/common.sh@366 -- # decimal 2
00:04:53.540 20:32:10 event -- scripts/common.sh@353 -- # local d=2
00:04:53.540 20:32:10 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:53.540 20:32:10 event -- scripts/common.sh@355 -- # echo 2
00:04:53.540 20:32:10 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:53.540 20:32:10 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:53.540 20:32:10 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:53.540 20:32:10 event -- scripts/common.sh@368 -- # return 0
00:04:53.540 20:32:10 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:53.540 20:32:10 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:53.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.540 --rc genhtml_branch_coverage=1
00:04:53.540 --rc genhtml_function_coverage=1
00:04:53.540 --rc genhtml_legend=1
00:04:53.540 --rc geninfo_all_blocks=1
00:04:53.540 --rc geninfo_unexecuted_blocks=1
00:04:53.540 
00:04:53.540 '
00:04:53.540 20:32:10 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:53.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.540 --rc genhtml_branch_coverage=1
00:04:53.540 --rc genhtml_function_coverage=1
00:04:53.540 --rc genhtml_legend=1
00:04:53.540 --rc geninfo_all_blocks=1
00:04:53.540 --rc geninfo_unexecuted_blocks=1
00:04:53.540 
00:04:53.540 '
00:04:53.540 20:32:10 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:53.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.540 --rc genhtml_branch_coverage=1
00:04:53.540 --rc genhtml_function_coverage=1
00:04:53.540 --rc genhtml_legend=1
00:04:53.540 --rc geninfo_all_blocks=1
00:04:53.540 --rc geninfo_unexecuted_blocks=1
00:04:53.540 
00:04:53.540 '
00:04:53.540 20:32:10 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:53.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.540 --rc genhtml_branch_coverage=1
00:04:53.540 --rc genhtml_function_coverage=1
00:04:53.540 --rc genhtml_legend=1
00:04:53.540 --rc geninfo_all_blocks=1
00:04:53.540 --rc geninfo_unexecuted_blocks=1
00:04:53.540 
00:04:53.540 '
00:04:53.540 20:32:10 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:04:53.540 20:32:10 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:53.540 20:32:10 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:53.540 20:32:10 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:04:53.540 20:32:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:53.540 20:32:10 event -- common/autotest_common.sh@10 -- # set +x
00:04:53.540 ************************************
00:04:53.540 START TEST event_perf
00:04:53.540 ************************************
00:04:53.540 20:32:10 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:55.227 Running I/O for 1 seconds...[2024-12-06 20:32:10.570078] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization...
00:04:55.227 [2024-12-06 20:32:10.570322] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58175 ]
00:04:55.227 [2024-12-06 20:32:10.727466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:55.227 [2024-12-06 20:32:10.819600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:55.227 [2024-12-06 20:32:10.819813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:55.227 [2024-12-06 20:32:10.820302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.227 Running I/O for 1 seconds...[2024-12-06 20:32:10.820173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:55.227 
00:04:55.227 lcore 0: 200210
00:04:55.227 lcore 1: 200212
00:04:55.227 lcore 2: 200211
00:04:55.227 lcore 3: 200210
00:04:55.227 done.
00:04:55.227 ************************************
00:04:55.227 END TEST event_perf
00:04:55.227 ************************************
00:04:55.227 
00:04:55.227 real 0m1.416s
00:04:55.227 user 0m4.209s
00:04:55.227 sys 0m0.079s
00:04:55.227 20:32:11 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:55.227 20:32:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x
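
event_perf dispatched events across all four reactors for one second, and the per-lcore counters above show each core retiring roughly 200k events. The START/END banners and the real/user/sys lines bracketing it, and every other test in this run, come from autotest_common.sh's run_test wrapper; its observable behaviour is approximately the following sketch (inferred from the log, not the actual helper, which also manages xtrace nesting and exit-code plumbing):

    # Approximation of run_test as inferred from this log's banners and timing.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # emits the real/user/sys lines seen after each test
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
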
00:04:55.227 20:32:11 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:55.227 20:32:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:55.227 20:32:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:55.227 20:32:11 event -- common/autotest_common.sh@10 -- # set +x
00:04:55.227 ************************************
00:04:55.227 START TEST event_reactor
00:04:55.227 ************************************
00:04:55.227 20:32:12 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:56.609 [2024-12-06 20:32:12.032265] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization...
00:04:56.609 [2024-12-06 20:32:12.032353] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58215 ]
00:04:56.609 [2024-12-06 20:32:12.182702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:56.609 [2024-12-06 20:32:12.263928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:56.609 test_start
00:04:56.609 oneshot
00:04:56.609 tick 100
00:04:56.609 tick 100
00:04:56.609 tick 250
00:04:56.609 tick 100
00:04:56.609 tick 100
00:04:56.609 tick 250
00:04:56.609 tick 100
00:04:56.609 tick 500
00:04:56.609 tick 100
00:04:56.609 tick 100
00:04:56.609 tick 250
00:04:56.609 tick 100
00:04:56.609 tick 100
00:04:56.609 test_end
00:04:56.609 
00:04:56.609 real 0m1.381s
00:04:56.609 user 0m1.217s
00:04:56.609 sys 0m0.057s
00:04:56.610 20:32:13 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:56.610 ************************************
00:04:56.610 END TEST event_reactor
00:04:56.610 ************************************
00:04:56.610 20:32:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:56.610 20:32:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:56.610 20:32:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:56.610 20:32:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:56.610 20:32:13 event -- common/autotest_common.sh@10 -- # set +x
00:04:56.610 ************************************
00:04:56.610 START TEST event_reactor_perf
00:04:56.610 ************************************
00:04:56.610 20:32:13 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:56.610 [2024-12-06 20:32:13.470445] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization...
00:04:56.610 [2024-12-06 20:32:13.470555] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58246 ]
00:04:56.610 [2024-12-06 20:32:13.627302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:56.610 [2024-12-06 20:32:13.709330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:57.992 test_start
00:04:57.992 test_end
00:04:57.992 Performance: 404559 events per second
00:04:57.992 ************************************
00:04:57.992 END TEST event_reactor_perf
00:04:57.992 ************************************
00:04:57.992 
00:04:57.992 real 0m1.394s
00:04:57.992 user 0m1.216s
00:04:57.992 sys 0m0.070s
00:04:57.992 20:32:14 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:57.992 20:32:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
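
Next comes event_scheduler, the only test in this batch that drives the application over JSON-RPC rather than just timing a binary. Reconstructed from the commands traced below (flags, paths, and thread IDs as they appear in this log; the real scheduler.sh may differ in detail), the flow is roughly:

    # Condensed shape of the traced scheduler test (a reconstruction, not scheduler.sh itself).
    "$rootdir/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    trap 'kill "$scheduler_pid"; exit 1' SIGINT SIGTERM EXIT
    # once /var/tmp/spdk.sock is up, configure and start the framework
    rpc_cmd framework_set_scheduler dynamic    # proceeds even when the DPDK governor fails (POWER errors below)
    rpc_cmd framework_start_init
    # create pinned/idle threads with varying cpumasks and loads via the test's RPC plugin
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread IDs as logged below
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
    kill "$scheduler_pid"
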
00:04:57.992 20:32:14 event -- event/event.sh@49 -- # uname -s
00:04:57.992 20:32:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:57.992 20:32:14 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:57.992 20:32:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:57.992 20:32:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.992 20:32:14 event -- common/autotest_common.sh@10 -- # set +x
00:04:57.992 ************************************
00:04:57.992 START TEST event_scheduler
00:04:57.992 ************************************
00:04:57.992 20:32:14 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:57.992 * Looking for test storage...
00:04:57.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:04:57.992 20:32:14 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:57.992 20:32:14 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:04:57.992 20:32:14 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:57.992 20:32:15 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:57.992 20:32:15 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:57.992 20:32:15 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:57.992 20:32:15 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:57.992 20:32:15 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:04:57.992 20:32:15 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:04:57.992 20:32:15 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:04:57.992 20:32:15 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:57.993 20:32:15 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:04:57.993 20:32:15 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:57.993 20:32:15 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:57.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.993 --rc genhtml_branch_coverage=1
00:04:57.993 --rc genhtml_function_coverage=1
00:04:57.993 --rc genhtml_legend=1
00:04:57.993 --rc geninfo_all_blocks=1
00:04:57.993 --rc geninfo_unexecuted_blocks=1
00:04:57.993 
00:04:57.993 '
00:04:57.993 20:32:15 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:57.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.993 --rc genhtml_branch_coverage=1
00:04:57.993 --rc genhtml_function_coverage=1
00:04:57.993 --rc genhtml_legend=1
00:04:57.993 --rc geninfo_all_blocks=1
00:04:57.993 --rc geninfo_unexecuted_blocks=1
00:04:57.993 
00:04:57.993 '
00:04:57.993 20:32:15 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:57.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.993 --rc genhtml_branch_coverage=1
00:04:57.993 --rc genhtml_function_coverage=1
00:04:57.993 --rc genhtml_legend=1
00:04:57.993 --rc geninfo_all_blocks=1
00:04:57.993 --rc geninfo_unexecuted_blocks=1
00:04:57.993 
00:04:57.993 '
00:04:57.993 20:32:15 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:57.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.993 --rc genhtml_branch_coverage=1
00:04:57.993 --rc genhtml_function_coverage=1
00:04:57.993 --rc genhtml_legend=1
00:04:57.993 --rc geninfo_all_blocks=1
00:04:57.993 --rc geninfo_unexecuted_blocks=1
00:04:57.993 
00:04:57.993 '
00:04:57.993 20:32:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:57.993 20:32:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58322
00:04:57.993 20:32:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:57.993 20:32:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:57.993 20:32:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58322
00:04:57.993 20:32:15
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58322 ']' 00:04:57.993 20:32:15 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.993 20:32:15 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.993 20:32:15 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.993 20:32:15 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.993 20:32:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.993 [2024-12-06 20:32:15.093268] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:04:57.993 [2024-12-06 20:32:15.094054] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58322 ] 00:04:58.251 [2024-12-06 20:32:15.254147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.251 [2024-12-06 20:32:15.360524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.251 [2024-12-06 20:32:15.360835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.251 [2024-12-06 20:32:15.361757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.251 [2024-12-06 20:32:15.361914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.852 20:32:15 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.852 20:32:15 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:58.852 20:32:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:58.852 20:32:15 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.852 20:32:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.852 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.852 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.852 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.852 POWER: Cannot set governor of lcore 0 to performance 00:04:58.852 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.852 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.852 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.852 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.852 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:58.852 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:58.852 POWER: Unable to set Power Management Environment for lcore 0 00:04:58.852 [2024-12-06 20:32:15.955583] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:58.852 [2024-12-06 20:32:15.955605] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:58.852 [2024-12-06 20:32:15.955614] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:58.852 [2024-12-06 20:32:15.955630] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:58.852 [2024-12-06 20:32:15.955637] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:58.852 [2024-12-06 20:32:15.955646] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:58.852 20:32:15 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.852 20:32:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:58.852 20:32:15 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.852 20:32:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.112 [2024-12-06 20:32:16.185562] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:59.112 20:32:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.112 20:32:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:59.112 20:32:16 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.112 20:32:16 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.112 20:32:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.112 ************************************ 00:04:59.112 START TEST scheduler_create_thread 00:04:59.112 ************************************ 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.112 2 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.112 3 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.112 4 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.112 5 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.112 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.374 6 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.374 7 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.374 8 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.374 9 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.374 10 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.374 20:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.312 ************************************ 00:05:00.312 END TEST scheduler_create_thread 00:05:00.312 ************************************ 00:05:00.312 20:32:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.312 00:05:00.312 real 0m1.176s 00:05:00.312 user 0m0.016s 00:05:00.312 sys 0m0.005s 00:05:00.312 20:32:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.312 20:32:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.312 20:32:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:00.312 20:32:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58322 00:05:00.313 20:32:17 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58322 ']' 00:05:00.313 20:32:17 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58322 00:05:00.313 20:32:17 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:00.313 20:32:17 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.313 20:32:17 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58322 00:05:00.574 killing process with pid 58322 00:05:00.574 20:32:17 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:00.574 20:32:17 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:00.575 20:32:17 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58322' 00:05:00.575 20:32:17 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58322 00:05:00.575 20:32:17 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58322 00:05:00.836 [2024-12-06 20:32:17.859681] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:01.777 00:05:01.777 real 0m3.729s 00:05:01.777 user 0m6.152s 00:05:01.777 sys 0m0.324s 00:05:01.777 ************************************ 00:05:01.777 END TEST event_scheduler 00:05:01.777 ************************************ 00:05:01.777 20:32:18 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.777 20:32:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.777 20:32:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:01.777 20:32:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:01.777 20:32:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.777 20:32:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.777 20:32:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.777 ************************************ 00:05:01.777 START TEST app_repeat 00:05:01.777 ************************************ 00:05:01.777 20:32:18 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58406 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.777 Process app_repeat pid: 58406 00:05:01.777 spdk_app_start Round 0 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58406' 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:01.777 20:32:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58406 /var/tmp/spdk-nbd.sock 00:05:01.777 20:32:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58406 ']' 00:05:01.777 20:32:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:01.777 20:32:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.777 20:32:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:01.777 20:32:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.777 20:32:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.777 [2024-12-06 20:32:18.730686] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
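The scheduler test traced above drives the test app entirely over JSON-RPC. Below is a minimal sketch of that sequence, assuming the scheduler test app is already up and listening on the default RPC socket; the rpc() wrapper and the loops are illustrative condensations, not part of the actual scheduler.sh.

#!/usr/bin/env bash
# Sketch of the scheduler_create_thread sequence traced above. Assumes the
# scheduler test app is running on the default RPC socket; the rpc.py path
# is taken from the trace.
set -euo pipefail
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"; }

# One 100%-active thread pinned to each of the first four cores...
for mask in 0x1 0x2 0x4 0x8; do
  rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
done
# ...and one 0%-active (idle) thread per core.
for mask in 0x1 0x2 0x4 0x8; do
  rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
# Unpinned threads; scheduler_thread_create prints the new thread id,
# which the test captures (thread_id=11, thread_id=12 in the trace).
rpc scheduler_thread_create -n one_third_active -a 30
thread_id=$(rpc scheduler_thread_create -n half_active -a 0)
rpc scheduler_thread_set_active "$thread_id" 50
# Create-and-delete round trip, as at the end of the test.
thread_id=$(rpc scheduler_thread_create -n deleted -a 100)
rpc scheduler_thread_delete "$thread_id"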
00:05:01.777 [2024-12-06 20:32:18.730794] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58406 ] 00:05:01.777 [2024-12-06 20:32:18.891355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.036 [2024-12-06 20:32:18.996514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.036 [2024-12-06 20:32:18.996627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.603 20:32:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.603 20:32:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.603 20:32:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.864 Malloc0 00:05:02.864 20:32:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.127 Malloc1 00:05:03.127 20:32:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.127 20:32:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.390 /dev/nbd0 00:05:03.390 20:32:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.390 20:32:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:03.390 20:32:20 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.390 1+0 records in 00:05:03.390 1+0 records out 00:05:03.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454199 s, 9.0 MB/s 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:03.390 20:32:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:03.390 20:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.390 20:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.390 20:32:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.650 /dev/nbd1 00:05:03.650 20:32:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.650 20:32:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.650 1+0 records in 00:05:03.650 1+0 records out 00:05:03.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000634676 s, 6.5 MB/s 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:03.650 20:32:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:03.650 20:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.650 20:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.650 20:32:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.650 20:32:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.650 
20:32:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.909 20:32:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:03.909 { 00:05:03.909 "nbd_device": "/dev/nbd0", 00:05:03.909 "bdev_name": "Malloc0" 00:05:03.909 }, 00:05:03.909 { 00:05:03.909 "nbd_device": "/dev/nbd1", 00:05:03.909 "bdev_name": "Malloc1" 00:05:03.909 } 00:05:03.909 ]' 00:05:03.909 20:32:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.909 { 00:05:03.909 "nbd_device": "/dev/nbd0", 00:05:03.909 "bdev_name": "Malloc0" 00:05:03.909 }, 00:05:03.909 { 00:05:03.909 "nbd_device": "/dev/nbd1", 00:05:03.910 "bdev_name": "Malloc1" 00:05:03.910 } 00:05:03.910 ]' 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.910 /dev/nbd1' 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.910 /dev/nbd1' 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.910 256+0 records in 00:05:03.910 256+0 records out 00:05:03.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120942 s, 86.7 MB/s 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.910 256+0 records in 00:05:03.910 256+0 records out 00:05:03.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247516 s, 42.4 MB/s 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.910 256+0 records in 00:05:03.910 256+0 records out 00:05:03.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0751374 s, 14.0 MB/s 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.910 20:32:20 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.910 20:32:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.170 20:32:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.170 20:32:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.170 20:32:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.170 20:32:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.170 20:32:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.170 20:32:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.170 20:32:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.170 20:32:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.170 20:32:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.170 20:32:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.429 20:32:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.430 20:32:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.430 20:32:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.430 20:32:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.430 20:32:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.430 20:32:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.430 20:32:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.430 20:32:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.430 20:32:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.430 20:32:21 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.430 20:32:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.687 20:32:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.687 20:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.687 20:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.687 20:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.687 20:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.687 20:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.687 20:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.687 20:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.687 20:32:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.687 20:32:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.687 20:32:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.687 20:32:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.687 20:32:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.948 20:32:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.955 [2024-12-06 20:32:22.699860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.955 [2024-12-06 20:32:22.801132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.955 [2024-12-06 20:32:22.801263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.955 [2024-12-06 20:32:22.934908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.955 [2024-12-06 20:32:22.934964] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.910 20:32:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:07.910 spdk_app_start Round 1 00:05:07.910 20:32:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:07.910 20:32:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58406 /var/tmp/spdk-nbd.sock 00:05:07.910 20:32:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58406 ']' 00:05:07.910 20:32:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.910 20:32:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.910 20:32:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
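Each app_repeat round repeats the same data-path check seen in the trace: back two NBD devices with malloc bdevs, push 1 MiB of random data through each, and read it back. A condensed sketch of one round follows, using the socket path and commands from the trace; running as root with the nbd kernel module loaded is an assumption, and the -b names are supplied here for clarity (the traced calls let the target auto-name Malloc0/Malloc1).

#!/usr/bin/env bash
# Sketch of one app_repeat data-verify round as traced above.
set -euo pipefail
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
RPC_SOCK=/var/tmp/spdk-nbd.sock
tmp=$(mktemp)

bdevs=(Malloc0 Malloc1)
nbds=(/dev/nbd0 /dev/nbd1)
# 64 MB malloc bdevs with a 4096-byte block size, exported over NBD.
for i in 0 1; do
  "$RPC" -s "$RPC_SOCK" bdev_malloc_create -b "${bdevs[$i]}" 64 4096
  "$RPC" -s "$RPC_SOCK" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
done

# Write the same 1 MiB of random data to each device, then verify it,
# exactly as nbd_dd_data_verify does in the trace.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in "${nbds[@]}"; do
  dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" "$nbd"
done
rm -f "$tmp"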
00:05:07.910 20:32:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.910 20:32:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.166 20:32:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.166 20:32:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:08.166 20:32:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.423 Malloc0 00:05:08.423 20:32:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.680 Malloc1 00:05:08.680 20:32:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.680 20:32:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.937 /dev/nbd0 00:05:08.937 20:32:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.937 20:32:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.937 1+0 records in 00:05:08.937 1+0 records out 
00:05:08.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180155 s, 22.7 MB/s 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.937 20:32:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.937 20:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.937 20:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.937 20:32:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.194 /dev/nbd1 00:05:09.194 20:32:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.194 20:32:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.194 20:32:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:09.194 20:32:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.194 20:32:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.194 20:32:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.194 20:32:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:09.194 20:32:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.194 20:32:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.194 20:32:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.194 20:32:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.194 1+0 records in 00:05:09.194 1+0 records out 00:05:09.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179763 s, 22.8 MB/s 00:05:09.195 20:32:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.195 20:32:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.195 20:32:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.195 20:32:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.195 20:32:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.195 20:32:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.195 20:32:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.195 20:32:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.195 20:32:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.195 20:32:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.450 20:32:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.450 { 00:05:09.451 "nbd_device": "/dev/nbd0", 00:05:09.451 "bdev_name": "Malloc0" 00:05:09.451 }, 00:05:09.451 { 00:05:09.451 "nbd_device": "/dev/nbd1", 00:05:09.451 "bdev_name": "Malloc1" 00:05:09.451 } 
00:05:09.451 ]' 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.451 { 00:05:09.451 "nbd_device": "/dev/nbd0", 00:05:09.451 "bdev_name": "Malloc0" 00:05:09.451 }, 00:05:09.451 { 00:05:09.451 "nbd_device": "/dev/nbd1", 00:05:09.451 "bdev_name": "Malloc1" 00:05:09.451 } 00:05:09.451 ]' 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.451 /dev/nbd1' 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.451 /dev/nbd1' 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.451 256+0 records in 00:05:09.451 256+0 records out 00:05:09.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445141 s, 236 MB/s 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.451 256+0 records in 00:05:09.451 256+0 records out 00:05:09.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188522 s, 55.6 MB/s 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.451 256+0 records in 00:05:09.451 256+0 records out 00:05:09.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240603 s, 43.6 MB/s 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.451 20:32:26 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.451 20:32:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.707 20:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.707 20:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.707 20:32:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.707 20:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.707 20:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.707 20:32:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.707 20:32:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.707 20:32:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.707 20:32:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.707 20:32:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.965 20:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.965 20:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.965 20:32:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.965 20:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.965 20:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.965 20:32:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.965 20:32:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.965 20:32:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.965 20:32:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.965 20:32:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.965 20:32:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.965 20:32:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.965 20:32:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.965 20:32:27 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:10.221 20:32:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.221 20:32:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.221 20:32:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.221 20:32:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.221 20:32:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.221 20:32:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.221 20:32:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.221 20:32:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.221 20:32:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.221 20:32:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.479 20:32:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:11.413 [2024-12-06 20:32:28.185208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.413 [2024-12-06 20:32:28.282932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.413 [2024-12-06 20:32:28.282936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.413 [2024-12-06 20:32:28.410702] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.413 [2024-12-06 20:32:28.410776] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.945 spdk_app_start Round 2 00:05:13.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.945 20:32:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.945 20:32:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:13.945 20:32:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58406 /var/tmp/spdk-nbd.sock 00:05:13.945 20:32:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58406 ']' 00:05:13.945 20:32:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.945 20:32:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.945 20:32:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
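Between rounds the trace tears everything down before spdk_app_start is invoked again. A sketch of that teardown follows; the 0.1 s poll interval is an assumption, since the trace shows only the 20-iteration grep loop inside waitfornbd_exit, not its sleep.

#!/usr/bin/env bash
# Sketch of the per-round teardown traced above: detach both NBD devices,
# confirm the target reports none attached, then ask the app to exit.
set -euo pipefail
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
RPC_SOCK=/var/tmp/spdk-nbd.sock

for name in nbd0 nbd1; do
  "$RPC" -s "$RPC_SOCK" nbd_stop_disk "/dev/$name"
  # Poll until the kernel drops the partition entry, as waitfornbd_exit does.
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$name" /proc/partitions || break
    sleep 0.1  # interval is an assumption; not visible in the trace
  done
done

# nbd_get_disks now returns an empty JSON array; count attached devices the
# same way the trace does (jq + grep -c). grep -c exits 1 on zero matches,
# so tolerate that under set -e.
count=$("$RPC" -s "$RPC_SOCK" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]
"$RPC" -s "$RPC_SOCK" spdk_kill_instance SIGTERM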
00:05:13.945 20:32:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.945 20:32:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.945 20:32:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.945 20:32:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:13.945 20:32:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.945 Malloc0 00:05:13.945 20:32:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.203 Malloc1 00:05:14.203 20:32:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.203 20:32:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.203 /dev/nbd0 00:05:14.461 20:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.461 20:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.461 20:32:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:14.461 20:32:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.461 20:32:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.462 1+0 records in 00:05:14.462 1+0 records out 
00:05:14.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000141965 s, 28.9 MB/s 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.462 20:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.462 20:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.462 20:32:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.462 /dev/nbd1 00:05:14.462 20:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.462 20:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.462 1+0 records in 00:05:14.462 1+0 records out 00:05:14.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235176 s, 17.4 MB/s 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.462 20:32:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.462 20:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.462 20:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.462 20:32:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.462 20:32:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.462 20:32:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.720 { 00:05:14.720 "nbd_device": "/dev/nbd0", 00:05:14.720 "bdev_name": "Malloc0" 00:05:14.720 }, 00:05:14.720 { 00:05:14.720 "nbd_device": "/dev/nbd1", 00:05:14.720 "bdev_name": "Malloc1" 00:05:14.720 } 
00:05:14.720 ]' 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.720 { 00:05:14.720 "nbd_device": "/dev/nbd0", 00:05:14.720 "bdev_name": "Malloc0" 00:05:14.720 }, 00:05:14.720 { 00:05:14.720 "nbd_device": "/dev/nbd1", 00:05:14.720 "bdev_name": "Malloc1" 00:05:14.720 } 00:05:14.720 ]' 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.720 /dev/nbd1' 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.720 /dev/nbd1' 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.720 256+0 records in 00:05:14.720 256+0 records out 00:05:14.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540007 s, 194 MB/s 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.720 256+0 records in 00:05:14.720 256+0 records out 00:05:14.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018658 s, 56.2 MB/s 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.720 20:32:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.720 256+0 records in 00:05:14.720 256+0 records out 00:05:14.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158426 s, 66.2 MB/s 00:05:14.721 20:32:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.721 20:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.721 20:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.721 20:32:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.721 20:32:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.721 20:32:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.721 20:32:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.721 20:32:31 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:14.721 20:32:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.721 20:32:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.721 20:32:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.979 20:32:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.979 20:32:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.979 20:32:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.979 20:32:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.979 20:32:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.979 20:32:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.979 20:32:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.979 20:32:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.979 20:32:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.979 20:32:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.979 20:32:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.979 20:32:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.979 20:32:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.979 20:32:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.979 20:32:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.979 20:32:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.979 20:32:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.979 20:32:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.236 20:32:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.236 20:32:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.236 20:32:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.236 20:32:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.236 20:32:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.236 20:32:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.236 20:32:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.236 20:32:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.236 20:32:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.236 20:32:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.236 20:32:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.494 20:32:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.494 20:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.494 20:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:15.494 20:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.494 20:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.494 20:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.494 20:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.494 20:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.494 20:32:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.494 20:32:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.494 20:32:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.494 20:32:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.494 20:32:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.752 20:32:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.318 [2024-12-06 20:32:33.355268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.318 [2024-12-06 20:32:33.437673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.318 [2024-12-06 20:32:33.437710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.577 [2024-12-06 20:32:33.542104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.577 [2024-12-06 20:32:33.542166] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.106 20:32:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58406 /var/tmp/spdk-nbd.sock 00:05:19.106 20:32:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58406 ']' 00:05:19.106 20:32:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.106 20:32:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.106 20:32:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:19.106 20:32:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.106 20:32:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.106 20:32:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.106 20:32:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:19.106 20:32:36 event.app_repeat -- event/event.sh@39 -- # killprocess 58406 00:05:19.106 20:32:36 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58406 ']' 00:05:19.106 20:32:36 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58406 00:05:19.106 20:32:36 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:19.106 20:32:36 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.106 20:32:36 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58406 00:05:19.106 killing process with pid 58406 00:05:19.106 20:32:36 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.106 20:32:36 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.106 20:32:36 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58406' 00:05:19.106 20:32:36 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58406 00:05:19.106 20:32:36 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58406 00:05:19.672 spdk_app_start is called in Round 0. 00:05:19.672 Shutdown signal received, stop current app iteration 00:05:19.672 Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 reinitialization... 00:05:19.672 spdk_app_start is called in Round 1. 00:05:19.672 Shutdown signal received, stop current app iteration 00:05:19.672 Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 reinitialization... 00:05:19.672 spdk_app_start is called in Round 2. 00:05:19.672 Shutdown signal received, stop current app iteration 00:05:19.672 Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 reinitialization... 00:05:19.672 spdk_app_start is called in Round 3. 00:05:19.672 Shutdown signal received, stop current app iteration 00:05:19.672 ************************************ 00:05:19.672 END TEST app_repeat 00:05:19.672 ************************************ 00:05:19.672 20:32:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:19.672 20:32:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:19.672 00:05:19.672 real 0m17.951s 00:05:19.672 user 0m39.215s 00:05:19.672 sys 0m2.111s 00:05:19.672 20:32:36 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.672 20:32:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.672 20:32:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:19.672 20:32:36 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:19.672 20:32:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.672 20:32:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.672 20:32:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.672 ************************************ 00:05:19.672 START TEST cpu_locks 00:05:19.672 ************************************ 00:05:19.672 20:32:36 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:19.672 * Looking for test storage... 
00:05:19.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:19.672 20:32:36 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.672 20:32:36 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.672 20:32:36 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.931 20:32:36 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.931 20:32:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:19.931 20:32:36 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.931 20:32:36 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.931 --rc genhtml_branch_coverage=1 00:05:19.931 --rc genhtml_function_coverage=1 00:05:19.931 --rc genhtml_legend=1 00:05:19.931 --rc geninfo_all_blocks=1 00:05:19.931 --rc geninfo_unexecuted_blocks=1 00:05:19.931 00:05:19.931 ' 00:05:19.931 20:32:36 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.931 --rc genhtml_branch_coverage=1 00:05:19.931 --rc genhtml_function_coverage=1 
00:05:19.931 --rc genhtml_legend=1 00:05:19.931 --rc geninfo_all_blocks=1 00:05:19.931 --rc geninfo_unexecuted_blocks=1 00:05:19.931 00:05:19.931 ' 00:05:19.931 20:32:36 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.931 --rc genhtml_branch_coverage=1 00:05:19.931 --rc genhtml_function_coverage=1 00:05:19.931 --rc genhtml_legend=1 00:05:19.931 --rc geninfo_all_blocks=1 00:05:19.931 --rc geninfo_unexecuted_blocks=1 00:05:19.931 00:05:19.931 ' 00:05:19.931 20:32:36 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.931 --rc genhtml_branch_coverage=1 00:05:19.931 --rc genhtml_function_coverage=1 00:05:19.931 --rc genhtml_legend=1 00:05:19.931 --rc geninfo_all_blocks=1 00:05:19.931 --rc geninfo_unexecuted_blocks=1 00:05:19.931 00:05:19.931 ' 00:05:19.931 20:32:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:19.931 20:32:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:19.931 20:32:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:19.931 20:32:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:19.931 20:32:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.931 20:32:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.931 20:32:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.931 ************************************ 00:05:19.931 START TEST default_locks 00:05:19.931 ************************************ 00:05:19.931 20:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:19.931 20:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58842 00:05:19.931 20:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58842 00:05:19.931 20:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.931 20:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58842 ']' 00:05:19.931 20:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.931 20:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.931 20:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.931 20:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.931 20:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.931 [2024-12-06 20:32:36.908399] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:05:19.931 [2024-12-06 20:32:36.908523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58842 ] 00:05:20.190 [2024-12-06 20:32:37.068370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.190 [2024-12-06 20:32:37.168379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.761 20:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.761 20:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:20.761 20:32:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58842 00:05:20.761 20:32:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.761 20:32:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58842 00:05:21.021 20:32:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58842 00:05:21.021 20:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58842 ']' 00:05:21.021 20:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58842 00:05:21.021 20:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:21.021 20:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.021 20:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58842 00:05:21.021 20:32:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.021 20:32:38 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.021 killing process with pid 58842 00:05:21.021 20:32:38 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58842' 00:05:21.021 20:32:38 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58842 00:05:21.021 20:32:38 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58842 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58842 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58842 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58842 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58842 ']' 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.923 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58842) - No such process 00:05:22.923 ERROR: process (pid: 58842) is no longer running 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.923 00:05:22.923 real 0m2.713s 00:05:22.923 user 0m2.735s 00:05:22.923 sys 0m0.440s 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.923 ************************************ 00:05:22.923 END TEST default_locks 00:05:22.923 20:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 ************************************ 00:05:22.923 20:32:39 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:22.923 20:32:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.923 20:32:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.923 20:32:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 ************************************ 00:05:22.923 START TEST default_locks_via_rpc 00:05:22.923 ************************************ 00:05:22.923 20:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:22.923 20:32:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58906 00:05:22.923 20:32:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58906 00:05:22.923 20:32:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.923 20:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58906 ']' 00:05:22.923 20:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.923 20:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.923 20:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.923 20:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.923 20:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 [2024-12-06 20:32:39.662665] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:05:22.923 [2024-12-06 20:32:39.662821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58906 ] 00:05:22.923 [2024-12-06 20:32:39.831688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.923 [2024-12-06 20:32:39.931992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58906 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.491 20:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58906 00:05:23.750 20:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58906 00:05:23.750 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58906 ']' 00:05:23.750 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58906 00:05:23.750 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:23.750 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.750 20:32:40 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58906 00:05:23.750 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.750 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.750 killing process with pid 58906 00:05:23.750 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58906' 00:05:23.750 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58906 00:05:23.750 20:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58906 00:05:25.123 00:05:25.123 real 0m2.582s 00:05:25.123 user 0m2.563s 00:05:25.123 sys 0m0.434s 00:05:25.123 20:32:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.123 20:32:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.123 ************************************ 00:05:25.123 END TEST default_locks_via_rpc 00:05:25.123 ************************************ 00:05:25.123 20:32:42 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:25.123 20:32:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.123 20:32:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.123 20:32:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.123 ************************************ 00:05:25.123 START TEST non_locking_app_on_locked_coremask 00:05:25.123 ************************************ 00:05:25.123 20:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:25.123 20:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58958 00:05:25.123 20:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58958 /var/tmp/spdk.sock 00:05:25.123 20:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58958 ']' 00:05:25.123 20:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.123 20:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.123 20:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.123 20:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.123 20:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.123 20:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.382 [2024-12-06 20:32:42.272220] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:05:25.382 [2024-12-06 20:32:42.272870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58958 ] 00:05:25.382 [2024-12-06 20:32:42.428862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.641 [2024-12-06 20:32:42.532739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.208 20:32:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.208 20:32:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.208 20:32:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:26.208 20:32:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58974 00:05:26.208 20:32:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58974 /var/tmp/spdk2.sock 00:05:26.208 20:32:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58974 ']' 00:05:26.208 20:32:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.208 20:32:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.208 20:32:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.208 20:32:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.208 20:32:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.208 [2024-12-06 20:32:43.212668] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:05:26.208 [2024-12-06 20:32:43.213112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58974 ] 00:05:26.467 [2024-12-06 20:32:43.386244] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:26.467 [2024-12-06 20:32:43.386305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.467 [2024-12-06 20:32:43.593669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.839 20:32:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.839 20:32:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.839 20:32:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58958 00:05:27.839 20:32:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58958 00:05:27.839 20:32:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.097 20:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58958 00:05:28.097 20:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58958 ']' 00:05:28.097 20:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58958 00:05:28.097 20:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.097 20:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.097 20:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58958 00:05:28.097 20:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.097 20:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.097 killing process with pid 58958 00:05:28.097 20:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58958' 00:05:28.097 20:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58958 00:05:28.097 20:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58958 00:05:31.375 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58974 00:05:31.375 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58974 ']' 00:05:31.375 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58974 00:05:31.375 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:31.375 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.375 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58974 00:05:31.375 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.375 killing process with pid 58974 00:05:31.375 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.375 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58974' 00:05:31.375 20:32:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58974 00:05:31.375 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58974 00:05:31.943 00:05:31.943 real 0m6.821s 00:05:31.943 user 0m7.094s 00:05:31.943 sys 0m0.890s 00:05:31.943 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.943 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.943 ************************************ 00:05:31.943 END TEST non_locking_app_on_locked_coremask 00:05:31.943 ************************************ 00:05:31.943 20:32:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:31.943 20:32:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.943 20:32:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.943 20:32:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.943 ************************************ 00:05:31.943 START TEST locking_app_on_unlocked_coremask 00:05:31.943 ************************************ 00:05:31.943 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:31.943 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59076 00:05:31.943 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59076 /var/tmp/spdk.sock 00:05:31.943 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59076 ']' 00:05:31.943 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.943 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.943 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.943 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:31.943 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.943 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.201 [2024-12-06 20:32:49.122004] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:05:32.201 [2024-12-06 20:32:49.122104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59076 ] 00:05:32.201 [2024-12-06 20:32:49.263972] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:32.201 [2024-12-06 20:32:49.264026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.460 [2024-12-06 20:32:49.351095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.027 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.027 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:33.027 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59092 00:05:33.027 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59092 /var/tmp/spdk2.sock 00:05:33.027 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59092 ']' 00:05:33.027 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:33.027 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.027 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.027 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.027 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.027 20:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.027 [2024-12-06 20:32:50.012074] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:05:33.027 [2024-12-06 20:32:50.012207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59092 ] 00:05:33.285 [2024-12-06 20:32:50.174477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.285 [2024-12-06 20:32:50.351245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.220 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.220 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:34.220 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59092 00:05:34.220 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59092 00:05:34.220 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.785 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59076 00:05:34.785 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59076 ']' 00:05:34.785 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59076 00:05:34.785 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:34.785 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.785 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59076 00:05:34.785 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.785 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.785 killing process with pid 59076 00:05:34.785 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59076' 00:05:34.785 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59076 00:05:34.786 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59076 00:05:37.354 20:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59092 00:05:37.354 20:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59092 ']' 00:05:37.354 20:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59092 00:05:37.354 20:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:37.354 20:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.354 20:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59092 00:05:37.354 20:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.354 killing process with pid 59092 00:05:37.354 20:32:54 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.354 20:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59092' 00:05:37.354 20:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59092 00:05:37.354 20:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59092 00:05:38.728 00:05:38.728 real 0m6.393s 00:05:38.728 user 0m6.544s 00:05:38.728 sys 0m0.836s 00:05:38.728 20:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.728 20:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.728 ************************************ 00:05:38.728 END TEST locking_app_on_unlocked_coremask 00:05:38.728 ************************************ 00:05:38.728 20:32:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:38.728 20:32:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.729 20:32:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.729 20:32:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.729 ************************************ 00:05:38.729 START TEST locking_app_on_locked_coremask 00:05:38.729 ************************************ 00:05:38.729 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:38.729 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59183 00:05:38.729 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59183 /var/tmp/spdk.sock 00:05:38.729 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59183 ']' 00:05:38.729 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.729 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.729 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.729 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.729 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.729 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.729 [2024-12-06 20:32:55.559725] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:05:38.729 [2024-12-06 20:32:55.559833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59183 ] 00:05:38.729 [2024-12-06 20:32:55.711182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.729 [2024-12-06 20:32:55.799297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59199 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59199 /var/tmp/spdk2.sock 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59199 /var/tmp/spdk2.sock 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:39.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59199 /var/tmp/spdk2.sock 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59199 ']' 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.660 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.660 [2024-12-06 20:32:56.543423] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:05:39.660 [2024-12-06 20:32:56.544148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59199 ] 00:05:39.660 [2024-12-06 20:32:56.726827] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59183 has claimed it. 00:05:39.660 [2024-12-06 20:32:56.726913] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:40.225 ERROR: process (pid: 59199) is no longer running 00:05:40.225 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59199) - No such process 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59183 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59183 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59183 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59183 ']' 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59183 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59183 00:05:40.225 killing process with pid 59183 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59183' 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59183 00:05:40.225 20:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59183 00:05:41.602 00:05:41.602 real 0m3.074s 00:05:41.602 user 0m3.334s 00:05:41.602 sys 0m0.576s 00:05:41.602 20:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.602 20:32:58 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:41.602 ************************************ 00:05:41.602 END TEST locking_app_on_locked_coremask 00:05:41.602 ************************************ 00:05:41.602 20:32:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:41.602 20:32:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.602 20:32:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.602 20:32:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.602 ************************************ 00:05:41.602 START TEST locking_overlapped_coremask 00:05:41.602 ************************************ 00:05:41.602 20:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:41.602 20:32:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59252 00:05:41.602 20:32:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59252 /var/tmp/spdk.sock 00:05:41.602 20:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59252 ']' 00:05:41.602 20:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.602 20:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.602 20:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.602 20:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.602 20:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.602 20:32:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:41.602 [2024-12-06 20:32:58.678930] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:05:41.602 [2024-12-06 20:32:58.679058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59252 ] 00:05:41.860 [2024-12-06 20:32:58.835797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.860 [2024-12-06 20:32:58.927998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.860 [2024-12-06 20:32:58.928221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.860 [2024-12-06 20:32:58.928316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59270 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59270 /var/tmp/spdk2.sock 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59270 /var/tmp/spdk2.sock 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59270 /var/tmp/spdk2.sock 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59270 ']' 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.459 20:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.761 [2024-12-06 20:32:59.606854] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:05:42.761 [2024-12-06 20:32:59.607674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59270 ] 00:05:42.761 [2024-12-06 20:32:59.820002] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59252 has claimed it. 00:05:42.761 [2024-12-06 20:32:59.820070] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.337 ERROR: process (pid: 59270) is no longer running 00:05:43.337 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59270) - No such process 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59252 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59252 ']' 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59252 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59252 00:05:43.337 killing process with pid 59252 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59252' 00:05:43.337 20:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59252 00:05:43.337 20:33:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59252 00:05:44.706 00:05:44.706 real 0m3.115s 00:05:44.706 user 0m8.601s 00:05:44.706 sys 0m0.472s 00:05:44.706 20:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.706 20:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.706 ************************************ 00:05:44.706 END TEST locking_overlapped_coremask 00:05:44.706 ************************************ 00:05:44.706 20:33:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:44.706 20:33:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.706 20:33:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.706 20:33:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.706 ************************************ 00:05:44.706 START TEST locking_overlapped_coremask_via_rpc 00:05:44.706 ************************************ 00:05:44.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.706 20:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:44.706 20:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59323 00:05:44.706 20:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59323 /var/tmp/spdk.sock 00:05:44.706 20:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:44.706 20:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59323 ']' 00:05:44.706 20:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.706 20:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.706 20:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.706 20:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.706 20:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.706 [2024-12-06 20:33:01.834807] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:05:44.706 [2024-12-06 20:33:01.834994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59323 ] 00:05:44.965 [2024-12-06 20:33:01.997550] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
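The claim error in the test that just ended is plain coremask arithmetic: the first target took -m 0x7 (binary 111, cores 0-2) and the second -m 0x1c (binary 11100, cores 2-4), so both masks contain core 2 and the second lock claim dies. A throwaway decode_coremask helper, not part of the suite, makes the overlap visible:

    decode_coremask() {
        local mask=$(( $1 )) core=0
        while (( mask > 0 )); do
            if (( mask & 1 )); then echo "core $core"; fi   # set bit -> core is in the mask
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
    }
    decode_coremask 0x7    # cores 0 1 2
    decode_coremask 0x1c   # cores 2 3 4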
00:05:44.965 [2024-12-06 20:33:01.997611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.965 [2024-12-06 20:33:02.089071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.965 [2024-12-06 20:33:02.089276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.965 [2024-12-06 20:33:02.089624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.898 20:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.898 20:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.898 20:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59341 00:05:45.898 20:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59341 /var/tmp/spdk2.sock 00:05:45.898 20:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:45.898 20:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59341 ']' 00:05:45.898 20:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.898 20:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.899 20:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.899 20:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.899 20:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.899 [2024-12-06 20:33:02.742050] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:05:45.899 [2024-12-06 20:33:02.742607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59341 ] 00:05:45.899 [2024-12-06 20:33:02.912281] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
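The via_rpc variant starts both targets with --disable-cpumask-locks, so the overlapping masks coexist at launch and the contest moves to runtime RPC. A rough manual replay of what the script drives, assuming the paths from this run:

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    scripts/rpc.py framework_enable_cpumask_locks                         # claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: core 2 taken

The second call is the one the NOT wrapper below asserts must fail.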
00:05:45.899 [2024-12-06 20:33:02.912355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.156 [2024-12-06 20:33:03.124672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.156 [2024-12-06 20:33:03.124765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.156 [2024-12-06 20:33:03.124788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.525 [2024-12-06 20:33:04.288046] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59323 has claimed it. 00:05:47.525 request: 00:05:47.525 { 00:05:47.525 "method": "framework_enable_cpumask_locks", 00:05:47.525 "req_id": 1 00:05:47.525 } 00:05:47.525 Got JSON-RPC error response 00:05:47.525 response: 00:05:47.525 { 00:05:47.525 "code": -32603, 00:05:47.525 "message": "Failed to claim CPU core: 2" 00:05:47.525 } 00:05:47.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
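The request/response pair above is ordinary JSON-RPC 2.0 framing over the target's UNIX socket, and -32603 is the spec's generic internal-error code, here carrying the core-claim failure. A sketch of the same exchange without rpc.py, assuming a netcat built with UNIX-socket support:

    printf '%s\n' '{"jsonrpc": "2.0", "method": "framework_enable_cpumask_locks", "id": 1}' \
        | nc -U /var/tmp/spdk2.sock
    # response, roughly:
    # {"jsonrpc": "2.0", "id": 1, "error": {"code": -32603, "message": "Failed to claim CPU core: 2"}}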
00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59323 /var/tmp/spdk.sock 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59323 ']' 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.525 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59341 /var/tmp/spdk2.sock 00:05:47.526 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59341 ']' 00:05:47.526 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.526 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.526 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
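The waitforlisten retry loop that dominates this stretch of trace just polls the socket until the target answers, up to the max_retries=100 visible above. A simplified stand-in (a hypothetical waitfor, not the real helper in autotest_common.sh, which does more bookkeeping):

    waitfor() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            sleep 0.1
        done
        return 1
    }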
00:05:47.526 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.526 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.823 ************************************ 00:05:47.823 END TEST locking_overlapped_coremask_via_rpc 00:05:47.823 ************************************ 00:05:47.823 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.823 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.823 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:47.823 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:47.823 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:47.823 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:47.823 00:05:47.823 real 0m2.981s 00:05:47.823 user 0m1.100s 00:05:47.823 sys 0m0.120s 00:05:47.823 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.823 20:33:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.823 20:33:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:47.823 20:33:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59323 ]] 00:05:47.823 20:33:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59323 00:05:47.823 20:33:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59323 ']' 00:05:47.823 20:33:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59323 00:05:47.823 20:33:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:47.823 20:33:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.823 20:33:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59323 00:05:47.823 killing process with pid 59323 00:05:47.823 20:33:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.823 20:33:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.823 20:33:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59323' 00:05:47.823 20:33:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59323 00:05:47.823 20:33:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59323 00:05:49.222 20:33:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59341 ]] 00:05:49.222 20:33:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59341 00:05:49.222 20:33:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59341 ']' 00:05:49.222 20:33:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59341 00:05:49.222 20:33:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:49.222 20:33:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.222 
20:33:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59341 00:05:49.222 killing process with pid 59341 00:05:49.222 20:33:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:49.222 20:33:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:49.222 20:33:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59341' 00:05:49.222 20:33:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59341 00:05:49.222 20:33:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59341 00:05:50.594 20:33:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.594 20:33:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:50.594 20:33:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59323 ]] 00:05:50.594 20:33:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59323 00:05:50.594 20:33:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59323 ']' 00:05:50.594 20:33:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59323 00:05:50.594 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59323) - No such process 00:05:50.594 Process with pid 59323 is not found 00:05:50.594 20:33:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59323 is not found' 00:05:50.594 20:33:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59341 ]] 00:05:50.594 Process with pid 59341 is not found 00:05:50.594 20:33:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59341 00:05:50.594 20:33:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59341 ']' 00:05:50.594 20:33:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59341 00:05:50.594 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59341) - No such process 00:05:50.594 20:33:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59341 is not found' 00:05:50.594 20:33:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.594 ************************************ 00:05:50.594 END TEST cpu_locks 00:05:50.594 ************************************ 00:05:50.594 00:05:50.594 real 0m30.626s 00:05:50.594 user 0m52.810s 00:05:50.594 sys 0m4.555s 00:05:50.594 20:33:07 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.594 20:33:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.594 ************************************ 00:05:50.594 END TEST event 00:05:50.594 ************************************ 00:05:50.594 00:05:50.594 real 0m56.940s 00:05:50.594 user 1m44.975s 00:05:50.594 sys 0m7.422s 00:05:50.594 20:33:07 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.594 20:33:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.594 20:33:07 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:50.594 20:33:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.594 20:33:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.594 20:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:50.594 ************************************ 00:05:50.594 START TEST thread 00:05:50.594 ************************************ 00:05:50.594 20:33:07 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:50.594 * Looking for test storage... 
00:05:50.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:50.594 20:33:07 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.594 20:33:07 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.594 20:33:07 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.594 20:33:07 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.594 20:33:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.594 20:33:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.594 20:33:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.594 20:33:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.594 20:33:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.594 20:33:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.595 20:33:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.595 20:33:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.595 20:33:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.595 20:33:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.595 20:33:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.595 20:33:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:50.595 20:33:07 thread -- scripts/common.sh@345 -- # : 1 00:05:50.595 20:33:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.595 20:33:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.595 20:33:07 thread -- scripts/common.sh@365 -- # decimal 1 00:05:50.595 20:33:07 thread -- scripts/common.sh@353 -- # local d=1 00:05:50.595 20:33:07 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.595 20:33:07 thread -- scripts/common.sh@355 -- # echo 1 00:05:50.595 20:33:07 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.595 20:33:07 thread -- scripts/common.sh@366 -- # decimal 2 00:05:50.595 20:33:07 thread -- scripts/common.sh@353 -- # local d=2 00:05:50.595 20:33:07 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.595 20:33:07 thread -- scripts/common.sh@355 -- # echo 2 00:05:50.595 20:33:07 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.595 20:33:07 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.595 20:33:07 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.595 20:33:07 thread -- scripts/common.sh@368 -- # return 0 00:05:50.595 20:33:07 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.595 20:33:07 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.595 --rc genhtml_branch_coverage=1 00:05:50.595 --rc genhtml_function_coverage=1 00:05:50.595 --rc genhtml_legend=1 00:05:50.595 --rc geninfo_all_blocks=1 00:05:50.595 --rc geninfo_unexecuted_blocks=1 00:05:50.595 00:05:50.595 ' 00:05:50.595 20:33:07 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.595 --rc genhtml_branch_coverage=1 00:05:50.595 --rc genhtml_function_coverage=1 00:05:50.595 --rc genhtml_legend=1 00:05:50.595 --rc geninfo_all_blocks=1 00:05:50.595 --rc geninfo_unexecuted_blocks=1 00:05:50.595 00:05:50.595 ' 00:05:50.595 20:33:07 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:50.595 --rc genhtml_branch_coverage=1 00:05:50.595 --rc genhtml_function_coverage=1 00:05:50.595 --rc genhtml_legend=1 00:05:50.595 --rc geninfo_all_blocks=1 00:05:50.595 --rc geninfo_unexecuted_blocks=1 00:05:50.595 00:05:50.595 ' 00:05:50.595 20:33:07 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.595 --rc genhtml_branch_coverage=1 00:05:50.595 --rc genhtml_function_coverage=1 00:05:50.595 --rc genhtml_legend=1 00:05:50.595 --rc geninfo_all_blocks=1 00:05:50.595 --rc geninfo_unexecuted_blocks=1 00:05:50.595 00:05:50.595 ' 00:05:50.595 20:33:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.595 20:33:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:50.595 20:33:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.595 20:33:07 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.595 ************************************ 00:05:50.595 START TEST thread_poller_perf 00:05:50.595 ************************************ 00:05:50.595 20:33:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.595 [2024-12-06 20:33:07.548794] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:05:50.595 [2024-12-06 20:33:07.548979] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59501 ] 00:05:50.595 [2024-12-06 20:33:07.725017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.853 [2024-12-06 20:33:07.876597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.853 Running 1000 pollers for 1 seconds with 1 microseconds period. 
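Comparing the two invocations in this file with the banner each prints, the poller_perf flags decode as: -b is the poller count, -t the run time in seconds, and -l the poller period in microseconds, where 0 means the poller fires on every reactor iteration:

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 s run
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # period 0: fire every iteration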
00:05:52.229 [2024-12-06T20:33:09.362Z] ====================================== 00:05:52.229 [2024-12-06T20:33:09.362Z] busy:2613261076 (cyc) 00:05:52.229 [2024-12-06T20:33:09.362Z] total_run_count: 311000 00:05:52.229 [2024-12-06T20:33:09.362Z] tsc_hz: 2600000000 (cyc) 00:05:52.229 [2024-12-06T20:33:09.362Z] ====================================== 00:05:52.229 [2024-12-06T20:33:09.362Z] poller_cost: 8402 (cyc), 3231 (nsec) 00:05:52.229 00:05:52.229 real 0m1.505s 00:05:52.229 user 0m1.305s 00:05:52.229 sys 0m0.091s 00:05:52.229 20:33:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.229 20:33:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.229 ************************************ 00:05:52.229 END TEST thread_poller_perf 00:05:52.229 ************************************ 00:05:52.229 20:33:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.229 20:33:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:52.229 20:33:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.229 20:33:09 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.229 ************************************ 00:05:52.229 START TEST thread_poller_perf 00:05:52.229 ************************************ 00:05:52.229 20:33:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.229 [2024-12-06 20:33:09.089089] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:05:52.229 [2024-12-06 20:33:09.089364] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59538 ] 00:05:52.229 [2024-12-06 20:33:09.246184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.229 [2024-12-06 20:33:09.331225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.229 Running 1000 pollers for 1 seconds with 0 microseconds period. 
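The first run's summary above is internally consistent: poller_cost is busy cycles divided by total_run_count, 2613261076 / 311000 = 8402.8, truncated to 8402 cyc, and at tsc_hz = 2.6 GHz that is about 3231 ns per poller invocation. The division replays with awk:

    awk 'BEGIN { busy = 2613261076; runs = 311000; hz = 2600000000
                 cyc = busy / runs
                 printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }'
    # poller_cost: 8402 (cyc), 3231 (nsec)

The zero-period run below works out the same way: 2602721788 / 4724000 comes to 550 cyc, about 211 ns.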
00:05:53.610 [2024-12-06T20:33:10.743Z] ====================================== 00:05:53.610 [2024-12-06T20:33:10.743Z] busy:2602721788 (cyc) 00:05:53.610 [2024-12-06T20:33:10.743Z] total_run_count: 4724000 00:05:53.610 [2024-12-06T20:33:10.743Z] tsc_hz: 2600000000 (cyc) 00:05:53.610 [2024-12-06T20:33:10.743Z] ====================================== 00:05:53.610 [2024-12-06T20:33:10.743Z] poller_cost: 550 (cyc), 211 (nsec) 00:05:53.610 00:05:53.610 real 0m1.402s 00:05:53.610 user 0m1.227s 00:05:53.610 sys 0m0.068s 00:05:53.610 20:33:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.610 ************************************ 00:05:53.610 END TEST thread_poller_perf 00:05:53.610 ************************************ 00:05:53.610 20:33:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.610 20:33:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:53.610 ************************************ 00:05:53.610 END TEST thread 00:05:53.610 ************************************ 00:05:53.610 00:05:53.610 real 0m3.124s 00:05:53.610 user 0m2.625s 00:05:53.610 sys 0m0.279s 00:05:53.610 20:33:10 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.610 20:33:10 thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.610 20:33:10 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:53.610 20:33:10 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:53.610 20:33:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.610 20:33:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.610 20:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:53.610 ************************************ 00:05:53.610 START TEST app_cmdline 00:05:53.610 ************************************ 00:05:53.610 20:33:10 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:53.610 * Looking for test storage... 
00:05:53.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:53.610 20:33:10 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:53.610 20:33:10 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:53.610 20:33:10 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:53.610 20:33:10 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:53.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
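cmdline.sh exercises the RPC allow-list: the target below is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and everything else is refused with JSON-RPC -32601 (Method not found), as the env_dpdk_get_mem_stats probe further down shows. In short:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py rpc_get_methods           # lists just the two allowed methods
    scripts/rpc.py spdk_get_version          # allowed: returns the version JSON
    scripts/rpc.py env_dpdk_get_mem_stats    # refused: -32601 Method not found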
00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.610 20:33:10 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:53.610 20:33:10 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.610 20:33:10 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:53.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.610 --rc genhtml_branch_coverage=1 00:05:53.610 --rc genhtml_function_coverage=1 00:05:53.610 --rc genhtml_legend=1 00:05:53.610 --rc geninfo_all_blocks=1 00:05:53.610 --rc geninfo_unexecuted_blocks=1 00:05:53.610 00:05:53.610 ' 00:05:53.610 20:33:10 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:53.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.610 --rc genhtml_branch_coverage=1 00:05:53.610 --rc genhtml_function_coverage=1 00:05:53.610 --rc genhtml_legend=1 00:05:53.610 --rc geninfo_all_blocks=1 00:05:53.610 --rc geninfo_unexecuted_blocks=1 00:05:53.610 00:05:53.610 ' 00:05:53.610 20:33:10 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:53.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.610 --rc genhtml_branch_coverage=1 00:05:53.610 --rc genhtml_function_coverage=1 00:05:53.610 --rc genhtml_legend=1 00:05:53.610 --rc geninfo_all_blocks=1 00:05:53.610 --rc geninfo_unexecuted_blocks=1 00:05:53.610 00:05:53.610 ' 00:05:53.610 20:33:10 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:53.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.610 --rc genhtml_branch_coverage=1 00:05:53.610 --rc genhtml_function_coverage=1 00:05:53.610 --rc genhtml_legend=1 00:05:53.610 --rc geninfo_all_blocks=1 00:05:53.610 --rc geninfo_unexecuted_blocks=1 00:05:53.610 00:05:53.610 ' 00:05:53.610 20:33:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:53.611 20:33:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59621 00:05:53.611 20:33:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59621 00:05:53.611 20:33:10 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59621 ']' 00:05:53.611 20:33:10 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.611 20:33:10 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.611 20:33:10 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.611 20:33:10 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.611 20:33:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:53.611 20:33:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:53.611 [2024-12-06 20:33:10.732913] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:05:53.611 [2024-12-06 20:33:10.733015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59621 ] 00:05:53.869 [2024-12-06 20:33:10.887030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.869 [2024-12-06 20:33:10.976665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.439 20:33:11 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.439 20:33:11 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:54.439 20:33:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:54.700 { 00:05:54.700 "version": "SPDK v25.01-pre git sha1 0354bb8e8", 00:05:54.700 "fields": { 00:05:54.700 "major": 25, 00:05:54.700 "minor": 1, 00:05:54.700 "patch": 0, 00:05:54.700 "suffix": "-pre", 00:05:54.700 "commit": "0354bb8e8" 00:05:54.700 } 00:05:54.700 } 00:05:54.700 20:33:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:54.700 20:33:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:54.700 20:33:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:54.700 20:33:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:54.700 20:33:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:54.700 20:33:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:54.700 20:33:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.700 20:33:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:54.700 20:33:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:54.700 20:33:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:54.700 20:33:11 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.961 request: 00:05:54.961 { 00:05:54.961 "method": "env_dpdk_get_mem_stats", 00:05:54.961 "req_id": 1 00:05:54.961 } 00:05:54.961 Got JSON-RPC error response 00:05:54.961 response: 00:05:54.961 { 00:05:54.961 "code": -32601, 00:05:54.961 "message": "Method not found" 00:05:54.961 } 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.961 20:33:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59621 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59621 ']' 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59621 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59621 00:05:54.961 killing process with pid 59621 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59621' 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@973 -- # kill 59621 00:05:54.961 20:33:11 app_cmdline -- common/autotest_common.sh@978 -- # wait 59621 00:05:56.335 ************************************ 00:05:56.335 END TEST app_cmdline 00:05:56.335 ************************************ 00:05:56.335 00:05:56.335 real 0m2.616s 00:05:56.335 user 0m2.840s 00:05:56.335 sys 0m0.396s 00:05:56.335 20:33:13 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.335 20:33:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:56.335 20:33:13 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:56.335 20:33:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.335 20:33:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.335 20:33:13 -- common/autotest_common.sh@10 -- # set +x 00:05:56.335 ************************************ 00:05:56.335 START TEST version 00:05:56.335 ************************************ 00:05:56.335 20:33:13 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:56.335 * Looking for test storage... 
00:05:56.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:56.335 20:33:13 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.335 20:33:13 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.335 20:33:13 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.335 20:33:13 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.335 20:33:13 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.335 20:33:13 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.335 20:33:13 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.335 20:33:13 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.335 20:33:13 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.335 20:33:13 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.335 20:33:13 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.335 20:33:13 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.335 20:33:13 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.335 20:33:13 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.335 20:33:13 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.335 20:33:13 version -- scripts/common.sh@344 -- # case "$op" in 00:05:56.335 20:33:13 version -- scripts/common.sh@345 -- # : 1 00:05:56.335 20:33:13 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.335 20:33:13 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.335 20:33:13 version -- scripts/common.sh@365 -- # decimal 1 00:05:56.335 20:33:13 version -- scripts/common.sh@353 -- # local d=1 00:05:56.335 20:33:13 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.335 20:33:13 version -- scripts/common.sh@355 -- # echo 1 00:05:56.335 20:33:13 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.335 20:33:13 version -- scripts/common.sh@366 -- # decimal 2 00:05:56.335 20:33:13 version -- scripts/common.sh@353 -- # local d=2 00:05:56.335 20:33:13 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.335 20:33:13 version -- scripts/common.sh@355 -- # echo 2 00:05:56.335 20:33:13 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.335 20:33:13 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.335 20:33:13 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.335 20:33:13 version -- scripts/common.sh@368 -- # return 0 00:05:56.336 20:33:13 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.336 20:33:13 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.336 --rc genhtml_branch_coverage=1 00:05:56.336 --rc genhtml_function_coverage=1 00:05:56.336 --rc genhtml_legend=1 00:05:56.336 --rc geninfo_all_blocks=1 00:05:56.336 --rc geninfo_unexecuted_blocks=1 00:05:56.336 00:05:56.336 ' 00:05:56.336 20:33:13 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.336 --rc genhtml_branch_coverage=1 00:05:56.336 --rc genhtml_function_coverage=1 00:05:56.336 --rc genhtml_legend=1 00:05:56.336 --rc geninfo_all_blocks=1 00:05:56.336 --rc geninfo_unexecuted_blocks=1 00:05:56.336 00:05:56.336 ' 00:05:56.336 20:33:13 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.336 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:56.336 --rc genhtml_branch_coverage=1 00:05:56.336 --rc genhtml_function_coverage=1 00:05:56.336 --rc genhtml_legend=1 00:05:56.336 --rc geninfo_all_blocks=1 00:05:56.336 --rc geninfo_unexecuted_blocks=1 00:05:56.336 00:05:56.336 ' 00:05:56.336 20:33:13 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.336 --rc genhtml_branch_coverage=1 00:05:56.336 --rc genhtml_function_coverage=1 00:05:56.336 --rc genhtml_legend=1 00:05:56.336 --rc geninfo_all_blocks=1 00:05:56.336 --rc geninfo_unexecuted_blocks=1 00:05:56.336 00:05:56.336 ' 00:05:56.336 20:33:13 version -- app/version.sh@17 -- # get_header_version major 00:05:56.336 20:33:13 version -- app/version.sh@14 -- # tr -d '"' 00:05:56.336 20:33:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:56.336 20:33:13 version -- app/version.sh@14 -- # cut -f2 00:05:56.336 20:33:13 version -- app/version.sh@17 -- # major=25 00:05:56.336 20:33:13 version -- app/version.sh@18 -- # get_header_version minor 00:05:56.336 20:33:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:56.336 20:33:13 version -- app/version.sh@14 -- # cut -f2 00:05:56.336 20:33:13 version -- app/version.sh@14 -- # tr -d '"' 00:05:56.336 20:33:13 version -- app/version.sh@18 -- # minor=1 00:05:56.336 20:33:13 version -- app/version.sh@19 -- # get_header_version patch 00:05:56.336 20:33:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:56.336 20:33:13 version -- app/version.sh@14 -- # cut -f2 00:05:56.336 20:33:13 version -- app/version.sh@14 -- # tr -d '"' 00:05:56.336 20:33:13 version -- app/version.sh@19 -- # patch=0 00:05:56.336 20:33:13 version -- app/version.sh@20 -- # get_header_version suffix 00:05:56.336 20:33:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:56.336 20:33:13 version -- app/version.sh@14 -- # cut -f2 00:05:56.336 20:33:13 version -- app/version.sh@14 -- # tr -d '"' 00:05:56.336 20:33:13 version -- app/version.sh@20 -- # suffix=-pre 00:05:56.336 20:33:13 version -- app/version.sh@22 -- # version=25.1 00:05:56.336 20:33:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:56.336 20:33:13 version -- app/version.sh@28 -- # version=25.1rc0 00:05:56.336 20:33:13 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:56.336 20:33:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:56.336 20:33:13 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:56.336 20:33:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:56.336 00:05:56.336 real 0m0.195s 00:05:56.336 user 0m0.131s 00:05:56.336 sys 0m0.090s 00:05:56.336 ************************************ 00:05:56.336 END TEST version 00:05:56.336 ************************************ 00:05:56.336 20:33:13 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.336 20:33:13 version -- common/autotest_common.sh@10 -- # set +x 00:05:56.336 20:33:13 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:56.336 20:33:13 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:56.336 20:33:13 -- spdk/autotest.sh@194 -- # uname -s 00:05:56.336 20:33:13 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:56.336 20:33:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:56.336 20:33:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:56.336 20:33:13 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:56.336 20:33:13 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:56.336 20:33:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:56.336 20:33:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.336 20:33:13 -- common/autotest_common.sh@10 -- # set +x 00:05:56.336 ************************************ 00:05:56.336 START TEST blockdev_nvme 00:05:56.336 ************************************ 00:05:56.336 20:33:13 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:56.594 * Looking for test storage... 00:05:56.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:56.594 20:33:13 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.594 20:33:13 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.594 20:33:13 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.594 20:33:13 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:56.594 20:33:13 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.595 20:33:13 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:56.595 20:33:13 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.595 20:33:13 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:56.595 20:33:13 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:56.595 20:33:13 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.595 20:33:13 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:56.595 20:33:13 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.595 20:33:13 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.595 20:33:13 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.595 20:33:13 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:56.595 20:33:13 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.595 20:33:13 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.595 --rc genhtml_branch_coverage=1 00:05:56.595 --rc genhtml_function_coverage=1 00:05:56.595 --rc genhtml_legend=1 00:05:56.595 --rc geninfo_all_blocks=1 00:05:56.595 --rc geninfo_unexecuted_blocks=1 00:05:56.595 00:05:56.595 ' 00:05:56.595 20:33:13 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.595 --rc genhtml_branch_coverage=1 00:05:56.595 --rc genhtml_function_coverage=1 00:05:56.595 --rc genhtml_legend=1 00:05:56.595 --rc geninfo_all_blocks=1 00:05:56.595 --rc geninfo_unexecuted_blocks=1 00:05:56.595 00:05:56.595 ' 00:05:56.595 20:33:13 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.595 --rc genhtml_branch_coverage=1 00:05:56.595 --rc genhtml_function_coverage=1 00:05:56.595 --rc genhtml_legend=1 00:05:56.595 --rc geninfo_all_blocks=1 00:05:56.595 --rc geninfo_unexecuted_blocks=1 00:05:56.595 00:05:56.595 ' 00:05:56.595 20:33:13 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.595 --rc genhtml_branch_coverage=1 00:05:56.595 --rc genhtml_function_coverage=1 00:05:56.595 --rc genhtml_legend=1 00:05:56.595 --rc geninfo_all_blocks=1 00:05:56.595 --rc geninfo_unexecuted_blocks=1 00:05:56.595 00:05:56.595 ' 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:56.595 20:33:13 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:56.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59788 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59788 00:05:56.595 20:33:13 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59788 ']' 00:05:56.595 20:33:13 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.595 20:33:13 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.595 20:33:13 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:56.595 20:33:13 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.595 20:33:13 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.595 20:33:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:56.595 [2024-12-06 20:33:13.663544] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
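Note on the pattern traced above: the harness launches spdk_tgt in the background, installs a cleanup trap, and blocks in waitforlisten until the RPC socket answers before any test runs. A minimal bash sketch of that launch/wait/cleanup sequence, assuming the repo layout from this run (the polling loop and the rpc_get_methods probe are illustrative choices, not copied from the harness):

  # Start the SPDK target and remember its pid (the harness passes two empty args).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
  spdk_tgt_pid=$!
  # Kill the target on any exit so a failed test cannot leak the process.
  trap 'kill "$spdk_tgt_pid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT
  # Poll the UNIX domain socket until the RPC server is up (waitforlisten equivalent).
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done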
00:05:56.595 [2024-12-06 20:33:13.663662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59788 ] 00:05:56.853 [2024-12-06 20:33:13.826996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.853 [2024-12-06 20:33:13.929041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.465 20:33:14 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.465 20:33:14 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:57.465 20:33:14 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:57.465 20:33:14 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:57.465 20:33:14 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:57.465 20:33:14 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:57.465 20:33:14 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:57.465 20:33:14 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:57.465 20:33:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.465 20:33:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.726 20:33:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.726 20:33:14 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:57.726 20:33:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.726 20:33:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.987 20:33:14 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a88ba2d1-4c18-46cf-ae70-138345b99e2f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a88ba2d1-4c18-46cf-ae70-138345b99e2f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b7729ea7-d82b-4ed5-8990-f205f28a0094"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b7729ea7-d82b-4ed5-8990-f205f28a0094",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d4291da6-a856-4c4d-8f79-c7b9f612819b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d4291da6-a856-4c4d-8f79-c7b9f612819b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2412aa75-cb8b-4009-8b2f-c4c0d1e3241f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2412aa75-cb8b-4009-8b2f-c4c0d1e3241f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "86bc3071-e04b-4c42-85a6-acb091154de3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "86bc3071-e04b-4c42-85a6-acb091154de3",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9f28ce29-27e0-4b14-b0ee-fd2984d7569f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9f28ce29-27e0-4b14-b0ee-fd2984d7569f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:57.987 20:33:14 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59788 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59788 ']' 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59788 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:57.987 20:33:14 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.987 20:33:14 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59788 00:05:57.987 killing process with pid 59788 00:05:57.987 20:33:15 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.987 20:33:15 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.987 20:33:15 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59788' 00:05:57.987 20:33:15 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59788 00:05:57.987 20:33:15 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59788 00:05:59.895 20:33:16 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:59.895 20:33:16 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:59.895 20:33:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:59.895 20:33:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.895 20:33:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.895 ************************************ 00:05:59.895 START TEST bdev_hello_world 00:05:59.895 ************************************ 00:05:59.895 20:33:16 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:59.895 [2024-12-06 20:33:16.569044] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:05:59.895 [2024-12-06 20:33:16.569176] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59872 ] 00:05:59.895 [2024-12-06 20:33:16.729276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.895 [2024-12-06 20:33:16.881114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.467 [2024-12-06 20:33:17.430855] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:00.467 [2024-12-06 20:33:17.431090] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:00.467 [2024-12-06 20:33:17.431122] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:00.467 [2024-12-06 20:33:17.433617] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:00.467 [2024-12-06 20:33:17.434155] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:00.467 [2024-12-06 20:33:17.434278] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:00.467 [2024-12-06 20:33:17.434556] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
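The hello-world step above opens the named bdev through the bdev layer, writes a buffer, reads it back, and prints the string it read. It can be reproduced outside the harness with the same binary and JSON config shown in this log (paths and the bdev name mirror the commands recorded above; nothing else is assumed):

  HELLO=/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  # -b selects which bdev from the config to exercise.
  "$HELLO" --json "$CONF" -b Nvme0n1
  # On success the app logs: Read string from bdev : Hello World!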
00:06:00.467 00:06:00.467 [2024-12-06 20:33:17.434584] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:01.404 ************************************ 00:06:01.404 END TEST bdev_hello_world 00:06:01.404 ************************************ 00:06:01.404 00:06:01.404 real 0m1.663s 00:06:01.404 user 0m1.377s 00:06:01.404 sys 0m0.176s 00:06:01.404 20:33:18 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.404 20:33:18 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:01.404 20:33:18 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:01.404 20:33:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:01.404 20:33:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.404 20:33:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:01.404 ************************************ 00:06:01.404 START TEST bdev_bounds 00:06:01.404 ************************************ 00:06:01.404 Process bdevio pid: 59914 00:06:01.404 20:33:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:01.404 20:33:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59914 00:06:01.404 20:33:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.404 20:33:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59914' 00:06:01.404 20:33:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59914 00:06:01.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.404 20:33:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59914 ']' 00:06:01.404 20:33:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.404 20:33:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:01.405 20:33:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.405 20:33:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.405 20:33:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.405 20:33:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:01.405 [2024-12-06 20:33:18.275169] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
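The bounds test that follows drives bdevio in wait mode and then triggers the CUnit suites over RPC. A sketch of that two-step invocation, mirroring the commands recorded in this log (the backgrounding and pid handling here are illustrative, not the harness's exact code):

  BDEVIO=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  # -w makes bdevio start its RPC server and wait instead of running tests immediately.
  "$BDEVIO" -w -s 0 --json "$CONF" &
  bdevio_pid=$!
  # Once the app is listening, kick off every registered suite, then clean up.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"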
00:06:01.405 [2024-12-06 20:33:18.275789] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59914 ] 00:06:01.405 [2024-12-06 20:33:18.436093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.666 [2024-12-06 20:33:18.542053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.666 [2024-12-06 20:33:18.542128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.666 [2024-12-06 20:33:18.542130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.234 20:33:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.234 20:33:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:02.234 20:33:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:02.234 I/O targets: 00:06:02.234 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:02.234 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:02.234 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:02.234 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:02.234 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:02.234 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:02.234 00:06:02.234 00:06:02.234 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.234 http://cunit.sourceforge.net/ 00:06:02.234 00:06:02.234 00:06:02.234 Suite: bdevio tests on: Nvme3n1 00:06:02.234 Test: blockdev write read block ...passed 00:06:02.234 Test: blockdev write zeroes read block ...passed 00:06:02.234 Test: blockdev write zeroes read no split ...passed 00:06:02.234 Test: blockdev write zeroes read split ...passed 00:06:02.234 Test: blockdev write zeroes read split partial ...passed 00:06:02.234 Test: blockdev reset ...[2024-12-06 20:33:19.269402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:02.234 [2024-12-06 20:33:19.272144] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:06:02.234 passed 00:06:02.234 Test: blockdev write read 8 blocks ...
00:06:02.234 passed 00:06:02.234 Test: blockdev write read size > 128k ...passed 00:06:02.234 Test: blockdev write read invalid size ...passed 00:06:02.234 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:02.234 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:02.234 Test: blockdev write read max offset ...passed 00:06:02.234 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:02.234 Test: blockdev writev readv 8 blocks ...passed 00:06:02.234 Test: blockdev writev readv 30 x 1block ...passed 00:06:02.234 Test: blockdev writev readv block ...passed 00:06:02.234 Test: blockdev writev readv size > 128k ...passed 00:06:02.234 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:02.234 Test: blockdev comparev and writev ...[2024-12-06 20:33:19.278987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c200a000 len:0x1000 00:06:02.234 [2024-12-06 20:33:19.279033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:02.234 passed 00:06:02.234 Test: blockdev nvme passthru rw ...passed 00:06:02.234 Test: blockdev nvme passthru vendor specific ...passed 00:06:02.234 Test: blockdev nvme admin passthru ...[2024-12-06 20:33:19.279581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:02.234 [2024-12-06 20:33:19.279610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:02.234 passed 00:06:02.234 Test: blockdev copy ...passed 00:06:02.234 Suite: bdevio tests on: Nvme2n3 00:06:02.234 Test: blockdev write read block ...passed 00:06:02.235 Test: blockdev write zeroes read block ...passed 00:06:02.235 Test: blockdev write zeroes read no split ...passed 00:06:02.235 Test: blockdev write zeroes read split ...passed 00:06:02.234 Test: blockdev write zeroes read split partial ...passed 00:06:02.235 Test: blockdev reset ...[2024-12-06 20:33:19.323990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:02.235 [2024-12-06 20:33:19.326882] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:02.235 passed 00:06:02.235 Test: blockdev write read 8 blocks ...
00:06:02.235 passed 00:06:02.235 Test: blockdev write read size > 128k ...passed 00:06:02.235 Test: blockdev write read invalid size ...passed 00:06:02.235 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:02.235 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:02.235 Test: blockdev write read max offset ...passed 00:06:02.235 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:02.235 Test: blockdev writev readv 8 blocks ...passed 00:06:02.235 Test: blockdev writev readv 30 x 1block ...passed 00:06:02.235 Test: blockdev writev readv block ...passed 00:06:02.235 Test: blockdev writev readv size > 128k ...passed 00:06:02.235 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:02.235 Test: blockdev comparev and writev ...[2024-12-06 20:33:19.333005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29d806000 len:0x1000 00:06:02.235 [2024-12-06 20:33:19.333140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:02.235 passed 00:06:02.235 Test: blockdev nvme passthru rw ...passed 00:06:02.235 Test: blockdev nvme passthru vendor specific ...passed 00:06:02.235 Test: blockdev nvme admin passthru ...[2024-12-06 20:33:19.333787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:02.235 [2024-12-06 20:33:19.333814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:02.235 passed 00:06:02.235 Test: blockdev copy ...passed 00:06:02.235 Suite: bdevio tests on: Nvme2n2 00:06:02.235 Test: blockdev write read block ...passed 00:06:02.235 Test: blockdev write zeroes read block ...passed 00:06:02.235 Test: blockdev write zeroes read no split ...passed 00:06:02.235 Test: blockdev write zeroes read split ...passed 00:06:02.235 Test: blockdev write zeroes read split partial ...passed 00:06:02.235 Test: blockdev reset ...[2024-12-06 20:33:19.379916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:02.235 [2024-12-06 20:33:19.383161] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:02.496 passed 00:06:02.496 Test: blockdev write read 8 blocks ...
00:06:02.496 passed 00:06:02.496 Test: blockdev write read size > 128k ...passed 00:06:02.496 Test: blockdev write read invalid size ...passed 00:06:02.496 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:02.496 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:02.496 Test: blockdev write read max offset ...passed 00:06:02.496 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:02.496 Test: blockdev writev readv 8 blocks ...passed 00:06:02.496 Test: blockdev writev readv 30 x 1block ...passed 00:06:02.496 Test: blockdev writev readv block ...passed 00:06:02.496 Test: blockdev writev readv size > 128k ...passed 00:06:02.496 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:02.496 Test: blockdev comparev and writev ...[2024-12-06 20:33:19.389175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dcc3c000 len:0x1000 00:06:02.496 [2024-12-06 20:33:19.389216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:02.496 passed 00:06:02.496 Test: blockdev nvme passthru rw ...passed 00:06:02.496 Test: blockdev nvme passthru vendor specific ...passed 00:06:02.496 Test: blockdev nvme admin passthru ...[2024-12-06 20:33:19.389775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:02.496 [2024-12-06 20:33:19.389799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:02.496 passed 00:06:02.496 Test: blockdev copy ...passed 00:06:02.496 Suite: bdevio tests on: Nvme2n1 00:06:02.496 Test: blockdev write read block ...passed 00:06:02.496 Test: blockdev write zeroes read block ...passed 00:06:02.496 Test: blockdev write zeroes read no split ...passed 00:06:02.496 Test: blockdev write zeroes read split ...passed 00:06:02.496 Test: blockdev write zeroes read split partial ...passed 00:06:02.496 Test: blockdev reset ...[2024-12-06 20:33:19.435126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:02.496 [2024-12-06 20:33:19.438230] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:02.496 passed 00:06:02.496 Test: blockdev write read 8 blocks ...
00:06:02.496 passed 00:06:02.496 Test: blockdev write read size > 128k ...passed 00:06:02.496 Test: blockdev write read invalid size ...passed 00:06:02.496 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:02.496 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:02.496 Test: blockdev write read max offset ...passed 00:06:02.496 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:02.496 Test: blockdev writev readv 8 blocks ...passed 00:06:02.496 Test: blockdev writev readv 30 x 1block ...passed 00:06:02.496 Test: blockdev writev readv block ...passed 00:06:02.496 Test: blockdev writev readv size > 128k ...passed 00:06:02.496 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:02.496 Test: blockdev comparev and writev ...[2024-12-06 20:33:19.444647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dcc38000 len:0x1000 00:06:02.496 [2024-12-06 20:33:19.444852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:02.496 passed 00:06:02.496 Test: blockdev nvme passthru rw ...passed 00:06:02.496 Test: blockdev nvme passthru vendor specific ...passed 00:06:02.496 Test: blockdev nvme admin passthru ...[2024-12-06 20:33:19.445519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:02.496 [2024-12-06 20:33:19.445560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:02.496 passed 00:06:02.496 Test: blockdev copy ...passed 00:06:02.496 Suite: bdevio tests on: Nvme1n1 00:06:02.496 Test: blockdev write read block ...passed 00:06:02.496 Test: blockdev write zeroes read block ...passed 00:06:02.496 Test: blockdev write zeroes read no split ...passed 00:06:02.496 Test: blockdev write zeroes read split ...passed 00:06:02.496 Test: blockdev write zeroes read split partial ...passed 00:06:02.496 Test: blockdev reset ...[2024-12-06 20:33:19.491959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:02.496 [2024-12-06 20:33:19.494808] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:06:02.496 passed 00:06:02.496 Test: blockdev write read 8 blocks ...passed 00:06:02.496 Test: blockdev write read size > 128k ...passed 00:06:02.496 Test: blockdev write read invalid size ...passed 00:06:02.496 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:02.496 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:02.496 Test: blockdev write read max offset ...passed 00:06:02.496 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:02.496 Test: blockdev writev readv 8 blocks ...passed 00:06:02.496 Test: blockdev writev readv 30 x 1block ...passed 00:06:02.496 Test: blockdev writev readv block ...passed 00:06:02.496 Test: blockdev writev readv size > 128k ...passed 00:06:02.496 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:02.496 Test: blockdev comparev and writev ...[2024-12-06 20:33:19.501626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dcc34000 len:0x1000 00:06:02.496 [2024-12-06 20:33:19.501784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:02.496 passed 00:06:02.496 Test: blockdev nvme passthru rw ...passed 00:06:02.496 Test: blockdev nvme passthru vendor specific ...[2024-12-06 20:33:19.502556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:02.496 [2024-12-06 20:33:19.502631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:02.496 passed 00:06:02.496 Test: blockdev nvme admin passthru ...passed 00:06:02.496 Test: blockdev copy ...passed 00:06:02.496 Suite: bdevio tests on: Nvme0n1 00:06:02.496 Test: blockdev write read block ...passed 00:06:02.496 Test: blockdev write zeroes read block ...passed 00:06:02.496 Test: blockdev write zeroes read no split ...passed 00:06:02.496 Test: blockdev write zeroes read split ...passed 00:06:02.496 Test: blockdev write zeroes read split partial ...passed 00:06:02.496 Test: blockdev reset ...[2024-12-06 20:33:19.562438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:02.496 [2024-12-06 20:33:19.565611] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:06:02.496 passed 00:06:02.496 Test: blockdev write read 8 blocks ...passed 00:06:02.496 Test: blockdev write read size > 128k ...passed 00:06:02.496 Test: blockdev write read invalid size ...passed 00:06:02.496 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:02.496 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:02.496 Test: blockdev write read max offset ...passed 00:06:02.496 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:02.496 Test: blockdev writev readv 8 blocks ...passed 00:06:02.496 Test: blockdev writev readv 30 x 1block ...passed 00:06:02.496 Test: blockdev writev readv block ...passed 00:06:02.496 Test: blockdev writev readv size > 128k ...passed 00:06:02.496 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:02.496 Test: blockdev comparev and writev ...[2024-12-06 20:33:19.571683] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:02.496 separate metadata which is not supported yet. 00:06:02.496 passed 00:06:02.496 Test: blockdev nvme passthru rw ...passed 00:06:02.497 Test: blockdev nvme passthru vendor specific ...[2024-12-06 20:33:19.572025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:02.497 [2024-12-06 20:33:19.572138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:02.497 passed 00:06:02.497 Test: blockdev nvme admin passthru ...passed 00:06:02.497 Test: blockdev copy ...passed 00:06:02.497 passed 00:06:02.497 00:06:02.497 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.497 suites 6 6 n/a 0 0 00:06:02.497 tests 138 138 138 0 0 00:06:02.497 asserts 893 893 893 0 n/a 00:06:02.497 00:06:02.497 Elapsed time = 0.936 seconds 00:06:02.497 0 00:06:02.497 20:33:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59914 00:06:02.497 20:33:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59914 ']' 00:06:02.497 20:33:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59914 00:06:02.497 20:33:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:02.497 20:33:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.497 20:33:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59914 00:06:02.497 killing process with pid 59914 00:06:02.497 20:33:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.497 20:33:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.497 20:33:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59914' 00:06:02.497 20:33:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59914 00:06:02.497 20:33:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59914 00:06:03.441 20:33:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:03.441 00:06:03.441 real 0m2.086s 00:06:03.441 user 0m5.333s 00:06:03.441 sys 0m0.293s 00:06:03.441 20:33:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.441 20:33:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:03.441 ************************************ 00:06:03.441 END
TEST bdev_bounds 00:06:03.441 ************************************ 00:06:03.441 20:33:20 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:03.441 20:33:20 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:03.441 20:33:20 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.441 20:33:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.441 ************************************ 00:06:03.441 START TEST bdev_nbd 00:06:03.441 ************************************ 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59968 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59968 /var/tmp/spdk-nbd.sock 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59968 ']' 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.441 
20:33:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.441 20:33:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:03.441 [2024-12-06 20:33:20.405996] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:06:03.441 [2024-12-06 20:33:20.406358] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.441 [2024-12-06 20:33:20.568860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.702 [2024-12-06 20:33:20.671850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.274 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:04.275 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.536 20:33:21 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.536 1+0 records in 00:06:04.536 1+0 records out 00:06:04.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340371 s, 12.0 MB/s 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:04.536 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:04.537 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:04.537 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.794 1+0 records in 00:06:04.794 1+0 records out 00:06:04.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375908 s, 10.9 MB/s 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:04.794 20:33:21 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:04.794 20:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.054 1+0 records in 00:06:05.054 1+0 records out 00:06:05.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034218 s, 12.0 MB/s 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:05.054 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.314 1+0 records in 00:06:05.314 1+0 records out 00:06:05.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501431 s, 8.2 MB/s 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:05.314 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.572 1+0 records in 00:06:05.572 1+0 records out 00:06:05.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479138 s, 8.5 MB/s 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:05.572 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.830 1+0 records in 00:06:05.830 1+0 records out 00:06:05.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556797 s, 7.4 MB/s 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:05.830 { 00:06:05.830 "nbd_device": "/dev/nbd0", 00:06:05.830 "bdev_name": "Nvme0n1" 00:06:05.830 }, 00:06:05.830 { 00:06:05.830 "nbd_device": "/dev/nbd1", 00:06:05.830 "bdev_name": "Nvme1n1" 00:06:05.830 }, 00:06:05.830 { 00:06:05.830 "nbd_device": "/dev/nbd2", 00:06:05.830 "bdev_name": "Nvme2n1" 00:06:05.830 }, 00:06:05.830 { 00:06:05.830 "nbd_device": "/dev/nbd3", 00:06:05.830 "bdev_name": "Nvme2n2" 00:06:05.830 }, 00:06:05.830 { 00:06:05.830 "nbd_device": "/dev/nbd4", 00:06:05.830 "bdev_name": "Nvme2n3" 00:06:05.830 }, 00:06:05.830 { 00:06:05.830 "nbd_device": "/dev/nbd5", 00:06:05.830 "bdev_name": "Nvme3n1" 00:06:05.830 } 00:06:05.830 ]' 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:05.830 { 00:06:05.830 "nbd_device": "/dev/nbd0", 00:06:05.830 "bdev_name": "Nvme0n1" 00:06:05.830 }, 00:06:05.830 { 00:06:05.830 "nbd_device": "/dev/nbd1", 00:06:05.830 "bdev_name": "Nvme1n1" 00:06:05.830 }, 00:06:05.830 { 00:06:05.830 
"nbd_device": "/dev/nbd2", 00:06:05.830 "bdev_name": "Nvme2n1" 00:06:05.830 }, 00:06:05.830 { 00:06:05.830 "nbd_device": "/dev/nbd3", 00:06:05.830 "bdev_name": "Nvme2n2" 00:06:05.830 }, 00:06:05.830 { 00:06:05.830 "nbd_device": "/dev/nbd4", 00:06:05.830 "bdev_name": "Nvme2n3" 00:06:05.830 }, 00:06:05.830 { 00:06:05.830 "nbd_device": "/dev/nbd5", 00:06:05.830 "bdev_name": "Nvme3n1" 00:06:05.830 } 00:06:05.830 ]' 00:06:05.830 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:06.087 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:06.087 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.087 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:06.087 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.087 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:06.087 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.087 20:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.087 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.087 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.087 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.087 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.087 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.087 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.087 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.087 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.087 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.087 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.344 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.344 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.344 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.345 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.345 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.345 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.345 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.345 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.345 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.345 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:06.602 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:06.602 20:33:23 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:06.602 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:06.602 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.602 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.602 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:06.602 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.602 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.602 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.602 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:06.861 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:06.861 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:06.861 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:06.861 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.861 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.861 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:06.861 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.861 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.861 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.861 20:33:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:07.119 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:07.119 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
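The pattern repeating through this trace is a pair of polling helpers: waitfornbd (common/autotest_common.sh) waits until a freshly attached /dev/nbdX appears in /proc/partitions and then proves it serves reads by pulling one 4 KiB block with dd iflag=direct and checking the copy is non-empty with stat, while waitfornbd_exit (bdev/nbd_common.sh) waits until the entry disappears again after nbd_stop_disk. A minimal sketch of the pair, reconstructed from the trace rather than copied from the SPDK sources: the 0.1 s sleep and the /tmp/nbdtest scratch path are assumptions (the harness itself uses test/bdev/nbdtest), while the 20-iteration bound comes from the (( i <= 20 )) checks above.

    # Sketch only: rebuilt from the xtrace above, not the canonical SPDK helpers.
    waitfornbd() {
        local nbd_name=$1 i
        # Wait for the kernel to register the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; the trace only shows the loop bounds
        done
        # Prove the device actually serves reads: one 4 KiB O_DIRECT read.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null &&
               [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]; then
                rm -f /tmp/nbdtest
                return 0
            fi
            sleep 0.1
        done
        rm -f /tmp/nbdtest
        return 1
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        # Mirror image: poll until the device is gone from /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }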
00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.120 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:07.377 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:07.634 /dev/nbd0 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.634 1+0 records in 00:06:07.634 1+0 records out 00:06:07.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044561 s, 9.2 MB/s 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:07.634 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:07.891 /dev/nbd1 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.891 1+0 records in 00:06:07.891 1+0 records out 
00:06:07.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370275 s, 11.1 MB/s 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:07.891 20:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:08.148 /dev/nbd10 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.148 1+0 records in 00:06:08.148 1+0 records out 00:06:08.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369024 s, 11.1 MB/s 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:08.148 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:08.404 /dev/nbd11 00:06:08.404 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:08.405 20:33:25 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.405 1+0 records in 00:06:08.405 1+0 records out 00:06:08.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490984 s, 8.3 MB/s 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:08.405 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:08.661 /dev/nbd12 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.661 1+0 records in 00:06:08.661 1+0 records out 00:06:08.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580459 s, 7.1 MB/s 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:08.661 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:08.919 /dev/nbd13 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.919 1+0 records in 00:06:08.919 1+0 records out 00:06:08.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525517 s, 7.8 MB/s 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.919 20:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.176 { 00:06:09.176 "nbd_device": "/dev/nbd0", 00:06:09.176 "bdev_name": "Nvme0n1" 00:06:09.176 }, 00:06:09.176 { 00:06:09.176 "nbd_device": "/dev/nbd1", 00:06:09.176 "bdev_name": "Nvme1n1" 00:06:09.176 }, 00:06:09.176 { 00:06:09.176 "nbd_device": "/dev/nbd10", 00:06:09.176 "bdev_name": "Nvme2n1" 00:06:09.176 }, 00:06:09.176 { 00:06:09.176 "nbd_device": "/dev/nbd11", 00:06:09.176 "bdev_name": "Nvme2n2" 00:06:09.176 }, 
00:06:09.176 { 00:06:09.176 "nbd_device": "/dev/nbd12", 00:06:09.176 "bdev_name": "Nvme2n3" 00:06:09.176 }, 00:06:09.176 { 00:06:09.176 "nbd_device": "/dev/nbd13", 00:06:09.176 "bdev_name": "Nvme3n1" 00:06:09.176 } 00:06:09.176 ]' 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.176 { 00:06:09.176 "nbd_device": "/dev/nbd0", 00:06:09.176 "bdev_name": "Nvme0n1" 00:06:09.176 }, 00:06:09.176 { 00:06:09.176 "nbd_device": "/dev/nbd1", 00:06:09.176 "bdev_name": "Nvme1n1" 00:06:09.176 }, 00:06:09.176 { 00:06:09.176 "nbd_device": "/dev/nbd10", 00:06:09.176 "bdev_name": "Nvme2n1" 00:06:09.176 }, 00:06:09.176 { 00:06:09.176 "nbd_device": "/dev/nbd11", 00:06:09.176 "bdev_name": "Nvme2n2" 00:06:09.176 }, 00:06:09.176 { 00:06:09.176 "nbd_device": "/dev/nbd12", 00:06:09.176 "bdev_name": "Nvme2n3" 00:06:09.176 }, 00:06:09.176 { 00:06:09.176 "nbd_device": "/dev/nbd13", 00:06:09.176 "bdev_name": "Nvme3n1" 00:06:09.176 } 00:06:09.176 ]' 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.176 /dev/nbd1 00:06:09.176 /dev/nbd10 00:06:09.176 /dev/nbd11 00:06:09.176 /dev/nbd12 00:06:09.176 /dev/nbd13' 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.176 /dev/nbd1 00:06:09.176 /dev/nbd10 00:06:09.176 /dev/nbd11 00:06:09.176 /dev/nbd12 00:06:09.176 /dev/nbd13' 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:09.176 256+0 records in 00:06:09.176 256+0 records out 00:06:09.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115343 s, 90.9 MB/s 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.176 256+0 records in 00:06:09.176 256+0 records out 00:06:09.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0667426 s, 15.7 MB/s 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.176 20:33:26 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.176 256+0 records in 00:06:09.176 256+0 records out 00:06:09.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0670956 s, 15.6 MB/s 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.176 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:09.434 256+0 records in 00:06:09.434 256+0 records out 00:06:09.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0663217 s, 15.8 MB/s 00:06:09.434 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.434 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:09.434 256+0 records in 00:06:09.434 256+0 records out 00:06:09.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0710976 s, 14.7 MB/s 00:06:09.434 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.434 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:09.434 256+0 records in 00:06:09.434 256+0 records out 00:06:09.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0691734 s, 15.2 MB/s 00:06:09.434 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.434 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:09.434 256+0 records in 00:06:09.434 256+0 records out 00:06:09.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0677126 s, 15.5 MB/s 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd10 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.737 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.738 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.738 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.738 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.738 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.738 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.738 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.738 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.997 20:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.997 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.997 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.997 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.997 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.997 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.997 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.997 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.997 20:33:27 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.997 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:10.255 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:10.255 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:10.255 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:10.255 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.255 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.255 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:10.255 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.255 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.255 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.255 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:10.514 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:10.514 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:10.514 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:10.514 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.514 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.514 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:10.514 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.514 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.514 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.514 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd13 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.774 20:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:11.033 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:11.290 malloc_lvol_verify 00:06:11.291 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:11.548 f1c0c15b-3af2-4522-b07c-6a5887e8c6a8 00:06:11.548 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:11.809 d3f82dbc-ff02-459e-b5ff-3c45e1b2da00 00:06:11.809 20:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:12.072 /dev/nbd0 00:06:12.072 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:12.072 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:12.072 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:12.072 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 
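Before mkfs.ext4 runs below, the harness gates on the NBD device having picked up the lvol's capacity: it checks that /sys/block/nbd0/size exists and that the value is non-zero, here 8192 512-byte sectors, i.e. the 4 MiB lvol created by bdev_lvol_create above (8192 x 512 = 4,194,304 bytes). A small sketch of that gate; the retry budget and sleep interval are assumptions, since the trace only shows the existence test and the size comparison.

    # Sketch: wait until the kernel reports a non-zero capacity for an NBD device.
    wait_for_nbd_set_capacity() {
        local nbd=$1 size i
        for ((i = 0; i < 10; i++)); do            # assumed retry budget
            if [[ -e /sys/block/$nbd/size ]]; then
                size=$(< "/sys/block/$nbd/size")  # capacity in 512-byte sectors
                (( size != 0 )) && return 0       # 8192 sectors == 4 MiB here
            fi
            sleep 0.1                             # assumed poll interval
        done
        return 1
    }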
00:06:12.072 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:12.072 mke2fs 1.47.0 (5-Feb-2023) 00:06:12.072 Discarding device blocks: 0/4096 done 00:06:12.072 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:12.072 00:06:12.072 Allocating group tables: 0/1 done 00:06:12.072 Writing inode tables: 0/1 done 00:06:12.072 Creating journal (1024 blocks): done 00:06:12.072 Writing superblocks and filesystem accounting information: 0/1 done 00:06:12.072 00:06:12.072 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:12.072 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.072 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:12.072 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.072 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:12.072 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.072 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59968 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59968 ']' 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59968 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59968 00:06:12.332 killing process with pid 59968 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59968' 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59968 00:06:12.332 20:33:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59968 00:06:13.266 20:33:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:13.266 00:06:13.266 real 0m9.825s 00:06:13.266 user 0m14.336s 00:06:13.266 sys 0m2.966s 00:06:13.266 20:33:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.266 20:33:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 
-- # set +x 00:06:13.266 ************************************ 00:06:13.266 END TEST bdev_nbd 00:06:13.266 ************************************ 00:06:13.266 20:33:30 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:13.266 20:33:30 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:13.266 skipping fio tests on NVMe due to multi-ns failures. 00:06:13.266 20:33:30 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:13.266 20:33:30 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:13.266 20:33:30 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:13.266 20:33:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:13.266 20:33:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.266 20:33:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.266 ************************************ 00:06:13.266 START TEST bdev_verify 00:06:13.266 ************************************ 00:06:13.266 20:33:30 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:13.266 [2024-12-06 20:33:30.266289] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:06:13.266 [2024-12-06 20:33:30.266419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60338 ] 00:06:13.524 [2024-12-06 20:33:30.426320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.524 [2024-12-06 20:33:30.530533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.524 [2024-12-06 20:33:30.530850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.094 Running I/O for 5 seconds... 
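Stripped of the run_test wrapper, the verify pass now starting is a single bdevperf invocation; the command below is taken verbatim from the trace (the trailing empty string argument that run_test passes through is dropped here). The flag comments reflect bdevperf's usual options; the meaning given for -C is an assumption, inferred from each bdev reporting both a Core Mask 0x1 and a Core Mask 0x2 job in the table that follows. At a 4096-byte I/O size the MiB/s column is simply IOPS x 4 KiB: 22016.00 IOPS x 4096 B / 2^20 = 86.00 MiB/s, the first sample below.

    # Flags: -q 128 queue depth, -o 4096 I/O size in bytes, -w verify workload,
    # -t 5 run time in seconds, -m 0x3 core mask (cores 0 and 1). -C appears to
    # let every core submit I/O to every bdev (assumption, see above).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3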
00:06:16.407 22016.00 IOPS, 86.00 MiB/s
[2024-12-06T20:33:34.478Z] 20992.00 IOPS, 82.00 MiB/s
[2024-12-06T20:33:35.419Z] 20565.33 IOPS, 80.33 MiB/s
[2024-12-06T20:33:36.361Z] 20944.00 IOPS, 81.81 MiB/s
[2024-12-06T20:33:36.361Z] 20838.40 IOPS, 81.40 MiB/s
00:06:19.228 Latency(us)
00:06:19.228 [2024-12-06T20:33:36.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:19.228 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:19.228 Verification LBA range: start 0x0 length 0xbd0bd
00:06:19.228 Nvme0n1 : 5.04 1700.82 6.64 0.00 0.00 74893.26 12905.55 77030.01
00:06:19.228 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:19.228 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:19.228 Nvme0n1 : 5.08 1714.36 6.70 0.00 0.00 74472.74 11947.72 75013.51
00:06:19.228 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:19.228 Verification LBA range: start 0x0 length 0xa0000
00:06:19.228 Nvme1n1 : 5.07 1703.61 6.65 0.00 0.00 74620.45 13107.20 71383.83
00:06:19.228 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:19.228 Verification LBA range: start 0xa0000 length 0xa0000
00:06:19.228 Nvme1n1 : 5.08 1713.87 6.69 0.00 0.00 74323.98 14821.22 72997.02
00:06:19.228 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:19.228 Verification LBA range: start 0x0 length 0x80000
00:06:19.228 Nvme2n1 : 5.09 1711.70 6.69 0.00 0.00 74268.46 10536.17 66947.54
00:06:19.228 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:19.228 Verification LBA range: start 0x80000 length 0x80000
00:06:19.228 Nvme2n1 : 5.08 1713.42 6.69 0.00 0.00 74156.34 17140.18 68157.44
00:06:19.228 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:19.228 Verification LBA range: start 0x0 length 0x80000
00:06:19.228 Nvme2n2 : 5.09 1711.19 6.68 0.00 0.00 74107.10 10939.47 64527.75
00:06:19.228 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:19.228 Verification LBA range: start 0x80000 length 0x80000
00:06:19.228 Nvme2n2 : 5.08 1712.96 6.69 0.00 0.00 74049.49 15123.69 72593.72
00:06:19.228 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:19.228 Verification LBA range: start 0x0 length 0x80000
00:06:19.228 Nvme2n3 : 5.09 1710.71 6.68 0.00 0.00 73955.02 11191.53 70173.93
00:06:19.228 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:19.228 Verification LBA range: start 0x80000 length 0x80000
00:06:19.228 Nvme2n3 : 5.08 1712.44 6.69 0.00 0.00 73912.56 14821.22 76223.41
00:06:19.228 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:19.228 Verification LBA range: start 0x0 length 0x20000
00:06:19.228 Nvme3n1 : 5.09 1710.21 6.68 0.00 0.00 73824.12 8771.74 74610.22
00:06:19.228 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:19.228 Verification LBA range: start 0x20000 length 0x20000
00:06:19.228 Nvme3n1 : 5.08 1711.97 6.69 0.00 0.00 73766.04 8570.09 74610.22
00:06:19.228 [2024-12-06T20:33:36.361Z] ===================================================================================================================
00:06:19.228 [2024-12-06T20:33:36.361Z] Total : 20527.26 80.18 0.00 0.00 74194.68 8570.09 77030.01
00:06:21.142
00:06:21.142 real 0m7.703s user 0m14.450s sys 0m0.232s
************************************ END
TEST bdev_verify 00:06:21.142 ************************************ 00:06:21.142 20:33:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.142 20:33:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:21.143 20:33:37 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:21.143 20:33:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:21.143 20:33:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.143 20:33:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.143 ************************************ 00:06:21.143 START TEST bdev_verify_big_io 00:06:21.143 ************************************ 00:06:21.143 20:33:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:21.143 [2024-12-06 20:33:38.023159] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:06:21.143 [2024-12-06 20:33:38.023305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60436 ] 00:06:21.143 [2024-12-06 20:33:38.185399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.403 [2024-12-06 20:33:38.293679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.403 [2024-12-06 20:33:38.293701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.973 Running I/O for 5 seconds... 
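bdev_verify_big_io repeats the exercise with -o 65536, so each throughput sample is IOPS x 64 KiB. A quick sanity check against the second sample below:

    # 2087.50 IOPS at 65536-byte I/Os:
    awk 'BEGIN { printf "%.2f MiB/s\n", 2087.50 * 65536 / 1048576 }'
    # -> 130.47 MiB/s, matching the logged sample

Note also that the per-job spread in the table that follows is far wider (roughly 91 to 287 IOPS) than in the 4 KiB pass (about 1700 to 1714), and per-job runtimes stretch well past the nominal 5 seconds accordingly.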
00:06:27.135 2007.00 IOPS, 125.44 MiB/s
[2024-12-06T20:33:45.210Z] 2087.50 IOPS, 130.47 MiB/s
[2024-12-06T20:33:45.470Z] 2222.67 IOPS, 138.92 MiB/s
[2024-12-06T20:33:45.730Z] 2252.00 IOPS, 140.75 MiB/s
00:06:28.597 Latency(us)
00:06:28.597 [2024-12-06T20:33:45.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:28.597 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:28.597 Verification LBA range: start 0x0 length 0xbd0b
00:06:28.598 Nvme0n1 : 5.62 91.16 5.70 0.00 0.00 1325361.53 14518.74 1806777.11
00:06:28.598 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:28.598 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:28.598 Nvme0n1 : 5.82 115.22 7.20 0.00 0.00 1052925.06 15426.17 1077613.49
00:06:28.598 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:28.598 Verification LBA range: start 0x0 length 0xa000
00:06:28.598 Nvme1n1 : 5.92 97.36 6.09 0.00 0.00 1172219.19 91952.05 1451874.46
00:06:28.598 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:28.598 Verification LBA range: start 0xa000 length 0xa000
00:06:28.598 Nvme1n1 : 5.87 119.85 7.49 0.00 0.00 999972.23 46782.62 993727.41
00:06:28.598 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:28.598 Verification LBA range: start 0x0 length 0x8000
00:06:28.598 Nvme2n1 : 5.99 106.77 6.67 0.00 0.00 1019239.66 39724.90 1129235.69
00:06:28.598 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:28.598 Verification LBA range: start 0x8000 length 0x8000
00:06:28.598 Nvme2n1 : 5.93 125.88 7.87 0.00 0.00 937354.69 42749.64 916294.10
00:06:28.598 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:28.598 Verification LBA range: start 0x0 length 0x8000
00:06:28.598 Nvme2n2 : 6.11 125.66 7.85 0.00 0.00 826007.50 26617.70 1167952.34
00:06:28.598 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:28.598 Verification LBA range: start 0x8000 length 0x8000
00:06:28.598 Nvme2n2 : 5.93 125.69 7.86 0.00 0.00 909427.61 42749.64 903388.55
00:06:28.598 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:28.598 Verification LBA range: start 0x0 length 0x8000
00:06:28.598 Nvme2n3 : 6.36 177.63 11.10 0.00 0.00 556507.02 11846.89 1200216.22
00:06:28.598 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:28.598 Verification LBA range: start 0x8000 length 0x8000
00:06:28.598 Nvme2n3 : 5.93 126.17 7.89 0.00 0.00 877518.95 42951.29 1032444.06
00:06:28.598 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:28.598 Verification LBA range: start 0x0 length 0x2000
00:06:28.598 Nvme3n1 : 6.59 287.30 17.96 0.00 0.00 330696.07 913.72 1219574.55
00:06:28.598 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:28.598 Verification LBA range: start 0x2000 length 0x2000
00:06:28.598 Nvme3n1 : 5.93 134.81 8.43 0.00 0.00 797401.47 3680.10 1122782.92
00:06:28.598 [2024-12-06T20:33:45.731Z] ===================================================================================================================
00:06:28.598 [2024-12-06T20:33:45.731Z] Total : 1633.50 102.09 0.00 0.00 803097.39 913.72 1806777.11
00:06:30.506
00:06:30.506 real 0m9.251s
00:06:30.506 user 0m17.482s
00:06:30.506 sys 0m0.274s
00:06:30.506 ************************************
00:06:30.506 END TEST bdev_verify_big_io
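For reference, a standalone run equivalent to the bdevperf invocation traced above (paths as in this workspace): -q is the queue depth, -o the I/O size in bytes (65536 for this big-IO pass, 4096 for the earlier verify pass), -w the workload, -t the run time in seconds, -m the core mask, and -C lets every core submit I/O to every bdev, which is why each bdev reports both a Core Mask 0x1 and a Core Mask 0x2 job in the tables.

    SPDK=/home/vagrant/spdk_repo/spdk
    # Same flags as the bdev_verify_big_io run above.
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3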
00:06:30.506 ************************************
00:06:30.506 20:33:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:30.506 20:33:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:30.506 20:33:47 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:30.506 20:33:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:30.506 20:33:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:30.506 20:33:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:30.506 ************************************
00:06:30.506 START TEST bdev_write_zeroes
00:06:30.506 ************************************
00:06:30.506 20:33:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:30.506 [2024-12-06 20:33:47.310484] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization...
00:06:30.506 [2024-12-06 20:33:47.310639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60558 ]
00:06:30.506 [2024-12-06 20:33:47.477041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:30.506 [2024-12-06 20:33:47.580658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.077 Running I/O for 1 seconds...
00:06:32.460 56997.00 IOPS, 222.64 MiB/s
00:06:32.461 Latency(us)
00:06:32.461 [2024-12-06T20:33:49.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:32.461 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:32.461 Nvme0n1 : 1.02 9514.04 37.16 0.00 0.00 13427.85 5494.94 29642.44
00:06:32.461 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:32.461 Nvme1n1 : 1.02 9531.52 37.23 0.00 0.00 13388.39 10284.11 20971.52
00:06:32.461 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:32.461 Nvme2n1 : 1.02 9522.40 37.20 0.00 0.00 13354.23 9830.40 20971.52
00:06:32.461 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:32.461 Nvme2n2 : 1.02 9451.24 36.92 0.00 0.00 13380.77 9326.28 25004.50
00:06:32.461 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:32.461 Nvme2n3 : 1.02 9442.49 36.88 0.00 0.00 13375.95 8318.03 24500.38
00:06:32.461 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:32.461 Nvme3n1 : 1.02 9433.73 36.85 0.00 0.00 13371.32 7208.96 24500.38
00:06:32.461 [2024-12-06T20:33:49.594Z] ===================================================================================================================
00:06:32.461 [2024-12-06T20:33:49.594Z] Total : 56895.42 222.25 0.00 0.00 13383.09 5494.94 29642.44
00:06:32.722
00:06:32.722 real 0m2.557s
00:06:32.722 user 0m2.255s
00:06:32.722 sys 0m0.185s
00:06:32.722 20:33:49 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:32.722 20:33:49 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:32.722 ************************************
00:06:32.722 END TEST bdev_write_zeroes
00:06:32.722 ************************************
00:06:32.722 20:33:49 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:32.722 20:33:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:32.722 20:33:49 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:32.722 20:33:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:32.722 ************************************
00:06:32.722 START TEST bdev_json_nonenclosed
00:06:32.722 ************************************
00:06:32.722 20:33:49 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:32.981 [2024-12-06 20:33:49.896531] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization...
00:06:32.981 [2024-12-06 20:33:49.896655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60611 ] 00:06:32.981 [2024-12-06 20:33:50.054444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.241 [2024-12-06 20:33:50.138214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.241 [2024-12-06 20:33:50.138293] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:33.241 [2024-12-06 20:33:50.138306] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:33.241 [2024-12-06 20:33:50.138314] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.241 00:06:33.241 real 0m0.454s 00:06:33.241 user 0m0.257s 00:06:33.241 sys 0m0.093s 00:06:33.241 20:33:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.241 20:33:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:33.241 ************************************ 00:06:33.241 END TEST bdev_json_nonenclosed 00:06:33.241 ************************************ 00:06:33.241 20:33:50 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.241 20:33:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:33.241 20:33:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.241 20:33:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.241 ************************************ 00:06:33.241 START TEST bdev_json_nonarray 00:06:33.241 ************************************ 00:06:33.241 20:33:50 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.500 [2024-12-06 20:33:50.379321] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:06:33.501 [2024-12-06 20:33:50.379419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60631 ] 00:06:33.501 [2024-12-06 20:33:50.530919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.501 [2024-12-06 20:33:50.615361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.501 [2024-12-06 20:33:50.615446] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
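The two json_config failures above come from feeding bdevperf deliberately malformed configuration files. The actual contents of nonenclosed.json and nonarray.json never appear in this log, so the bodies below are only illustrative sketches of the two rejected shapes next to the accepted one:

    # Accepted: a top-level object whose "subsystems" key is an array.
    printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > good.json
    # Rejected at json_config.c:608 ("not enclosed in {}"): no top-level object.
    printf '%s\n' '[ { "subsystem": "bdev", "config": [] } ]' > nonenclosed.json
    # Rejected at json_config.c:614 ("'subsystems' should be an array").
    printf '%s\n' '{ "subsystems": { "subsystem": "bdev", "config": [] } }' > nonarray.json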
00:06:33.501 [2024-12-06 20:33:50.615461] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:33.501 [2024-12-06 20:33:50.615469] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.761 00:06:33.761 real 0m0.428s 00:06:33.761 user 0m0.247s 00:06:33.761 sys 0m0.078s 00:06:33.761 20:33:50 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.761 20:33:50 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:33.761 ************************************ 00:06:33.761 END TEST bdev_json_nonarray 00:06:33.761 ************************************ 00:06:33.761 20:33:50 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:33.761 20:33:50 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:33.762 20:33:50 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:33.762 20:33:50 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:33.762 20:33:50 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:33.762 20:33:50 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:33.762 20:33:50 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:33.762 20:33:50 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:33.762 20:33:50 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:33.762 20:33:50 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:33.762 20:33:50 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:33.762 00:06:33.762 real 0m37.364s 00:06:33.762 user 0m58.923s 00:06:33.762 sys 0m5.001s 00:06:33.762 20:33:50 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.762 20:33:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.762 ************************************ 00:06:33.762 END TEST blockdev_nvme 00:06:33.762 ************************************ 00:06:33.762 20:33:50 -- spdk/autotest.sh@209 -- # uname -s 00:06:33.762 20:33:50 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:33.762 20:33:50 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:33.762 20:33:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.762 20:33:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.762 20:33:50 -- common/autotest_common.sh@10 -- # set +x 00:06:33.762 ************************************ 00:06:33.762 START TEST blockdev_nvme_gpt 00:06:33.762 ************************************ 00:06:33.762 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:34.023 * Looking for test storage... 
00:06:34.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:34.023 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:34.023 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:34.023 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:34.023 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.023 20:33:50 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:34.023 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.023 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:34.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.023 --rc genhtml_branch_coverage=1 00:06:34.023 --rc genhtml_function_coverage=1 00:06:34.023 --rc genhtml_legend=1 00:06:34.023 --rc geninfo_all_blocks=1 00:06:34.023 --rc geninfo_unexecuted_blocks=1 00:06:34.023 00:06:34.023 ' 00:06:34.023 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:34.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.023 --rc 
genhtml_branch_coverage=1 00:06:34.023 --rc genhtml_function_coverage=1 00:06:34.023 --rc genhtml_legend=1 00:06:34.023 --rc geninfo_all_blocks=1 00:06:34.023 --rc geninfo_unexecuted_blocks=1 00:06:34.023 00:06:34.023 ' 00:06:34.023 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:34.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.023 --rc genhtml_branch_coverage=1 00:06:34.023 --rc genhtml_function_coverage=1 00:06:34.023 --rc genhtml_legend=1 00:06:34.023 --rc geninfo_all_blocks=1 00:06:34.023 --rc geninfo_unexecuted_blocks=1 00:06:34.023 00:06:34.023 ' 00:06:34.024 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:34.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.024 --rc genhtml_branch_coverage=1 00:06:34.024 --rc genhtml_function_coverage=1 00:06:34.024 --rc genhtml_legend=1 00:06:34.024 --rc geninfo_all_blocks=1 00:06:34.024 --rc geninfo_unexecuted_blocks=1 00:06:34.024 00:06:34.024 ' 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60715 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60715 
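The start_spdk_tgt helper traced here launches spdk_tgt and blocks in waitforlisten until the target's RPC socket answers. A minimal sketch of that pattern; the real helper in test/common/autotest_common.sh does more bookkeeping, and polling via spdk_get_version is an assumption, not a transcript of its implementation:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" &   # pid 60715 in this run
    spdk_tgt_pid=$!
    # Poll the default RPC socket until the server is listening.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done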
00:06:34.024 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60715 ']' 00:06:34.024 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.024 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.024 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.024 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.024 20:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:34.024 20:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:34.024 [2024-12-06 20:33:51.055680] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:06:34.024 [2024-12-06 20:33:51.055805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60715 ] 00:06:34.302 [2024-12-06 20:33:51.212003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.302 [2024-12-06 20:33:51.294400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.875 20:33:51 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.875 20:33:51 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:34.875 20:33:51 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:34.875 20:33:51 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:34.875 20:33:51 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:35.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:35.135 Waiting for block devices as requested 00:06:35.135 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.396 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.396 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.396 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:40.675 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:40.675 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:40.675 20:33:57 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:40.675 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:40.676 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:40.676 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:40.676 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:40.676 20:33:57 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:40.676 20:33:57 
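Condensed, the get_zoned_devs scan traced above boils down to checking each NVMe namespace's zoned attribute in sysfs (every check in this run compared "none" against "none", so no zoned devices were recorded):

    # A namespace is zoned when /sys/block/<ns>/queue/zoned reads anything but "none".
    for nvme in /sys/class/nvme/nvme*; do
        for ns in "$nvme"/nvme*n*; do
            dev=$(basename "$ns")
            if [[ -e /sys/block/$dev/queue/zoned && $(< "/sys/block/$dev/queue/zoned") != none ]]; then
                echo "zoned: $dev"
            fi
        done
    done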
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:40.676 BYT; 00:06:40.676 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:40.676 BYT; 00:06:40.676 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:40.676 20:33:57 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:40.676 20:33:57 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:41.617 The operation has completed successfully. 00:06:41.617 20:33:58 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:42.559 The operation has completed successfully. 00:06:42.559 20:33:59 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:43.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.700 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.700 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.700 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.700 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.700 20:34:00 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:43.700 20:34:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.700 20:34:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.700 [] 00:06:43.700 20:34:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.700 20:34:00 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:43.700 20:34:00 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:43.700 20:34:00 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:43.701 20:34:00 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:43.701 20:34:00 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:43.701 20:34:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.701 20:34:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.961 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:43.961 20:34:01 
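Condensed, the GPT setup just performed labels /dev/nvme0n1 and then retypes the two partitions with the GUIDs grep'd out of module/bdev/gpt/gpt.h, so SPDK's gpt bdev module will claim them; the commands and GUIDs below are the ones in the trace:

    SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b        # SPDK_GPT_PART_TYPE_GUID
    SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c    # SPDK_GPT_PART_TYPE_GUID_OLD
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    # Retype each partition and pin the unique partition GUIDs used by the test:
    sgdisk -t 1:$SPDK_GPT_GUID -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:$SPDK_GPT_OLD_GUID -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1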
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.961 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:43.961 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.961 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.961 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.961 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:43.961 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.961 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.961 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:44.220 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.220 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:44.220 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:44.220 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "38b4ea91-7604-41b6-8620-39a0b872e4d7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "38b4ea91-7604-41b6-8620-39a0b872e4d7",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "bddc2feb-17e4-4c3a-b49e-30a539e0a1c8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bddc2feb-17e4-4c3a-b49e-30a539e0a1c8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8d1f13d8-0b46-4d96-9d1f-5d60a6663103"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8d1f13d8-0b46-4d96-9d1f-5d60a6663103",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9a5f5615-6e40-4cab-9a5d-f6c34397f906"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9a5f5615-6e40-4cab-9a5d-f6c34397f906",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "13a197bf-cfa5-4490-9658-f432062f10ac"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "13a197bf-cfa5-4490-9658-f432062f10ac",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:44.220 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:44.220 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:44.220 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:44.220 20:34:01 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60715 00:06:44.220 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60715 ']' 00:06:44.220 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60715 00:06:44.220 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:44.220 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.220 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60715 00:06:44.220 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.220 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.220 killing process with pid 60715 00:06:44.220 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60715' 00:06:44.220 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60715 00:06:44.220 20:34:01 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60715 00:06:45.619 20:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:45.619 20:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:45.619 20:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:45.619 20:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.619 20:34:02 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.619 ************************************ 00:06:45.619 START TEST bdev_hello_world 00:06:45.619 ************************************ 00:06:45.619 20:34:02 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:45.877 [2024-12-06 20:34:02.761352] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:06:45.877 [2024-12-06 20:34:02.761469] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61331 ] 00:06:45.877 [2024-12-06 20:34:02.922382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.134 [2024-12-06 20:34:03.022966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.699 [2024-12-06 20:34:03.579447] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:46.699 [2024-12-06 20:34:03.579498] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:46.699 [2024-12-06 20:34:03.579523] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:46.699 [2024-12-06 20:34:03.582029] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:46.699 [2024-12-06 20:34:03.582953] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:46.699 [2024-12-06 20:34:03.582994] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:46.699 [2024-12-06 20:34:03.583169] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:46.699 00:06:46.699 [2024-12-06 20:34:03.583191] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:47.265 00:06:47.265 real 0m1.626s 00:06:47.265 user 0m1.337s 00:06:47.265 sys 0m0.179s 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:47.265 ************************************ 00:06:47.265 END TEST bdev_hello_world 00:06:47.265 ************************************ 00:06:47.265 20:34:04 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:47.265 20:34:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:47.265 20:34:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.265 20:34:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.265 ************************************ 00:06:47.265 START TEST bdev_bounds 00:06:47.265 ************************************ 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:47.265 Process bdevio pid: 61373 00:06:47.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61373 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61373' 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61373 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61373 ']' 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:47.265 20:34:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:47.523 [2024-12-06 20:34:04.422801] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:06:47.523 [2024-12-06 20:34:04.422941] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61373 ] 00:06:47.523 [2024-12-06 20:34:04.583616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.781 [2024-12-06 20:34:04.687947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.781 [2024-12-06 20:34:04.688131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.781 [2024-12-06 20:34:04.688398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.346 20:34:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.346 20:34:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:48.346 20:34:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:48.346 I/O targets: 00:06:48.346 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:48.346 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:48.346 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:48.346 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:48.346 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:48.346 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:48.346 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:48.346 00:06:48.346 00:06:48.346 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.346 http://cunit.sourceforge.net/ 00:06:48.346 00:06:48.346 00:06:48.346 Suite: bdevio tests on: Nvme3n1 00:06:48.346 Test: blockdev write read block ...passed 00:06:48.346 Test: blockdev write zeroes read block ...passed 00:06:48.346 Test: blockdev write zeroes read no split ...passed 00:06:48.346 Test: blockdev write zeroes read split ...passed 00:06:48.346 Test: blockdev write zeroes 
read split partial ...passed 00:06:48.346 Test: blockdev reset ...[2024-12-06 20:34:05.394985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:48.346 [2024-12-06 20:34:05.397925] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:06:48.346 passed 00:06:48.346 Test: blockdev write read 8 blocks ...passed 00:06:48.346 Test: blockdev write read size > 128k ...passed 00:06:48.346 Test: blockdev write read invalid size ...passed 00:06:48.346 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.346 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.346 Test: blockdev write read max offset ...passed 00:06:48.346 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.346 Test: blockdev writev readv 8 blocks ...passed 00:06:48.346 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.346 Test: blockdev writev readv block ...passed 00:06:48.346 Test: blockdev writev readv size > 128k ...passed 00:06:48.346 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.346 Test: blockdev comparev and writev ...[2024-12-06 20:34:05.404156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf804000 len:0x1000 00:06:48.346 [2024-12-06 20:34:05.404210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.346 passed 00:06:48.346 Test: blockdev nvme passthru rw ...passed 00:06:48.346 Test: blockdev nvme passthru vendor specific ...[2024-12-06 20:34:05.404938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:48.346 passed 00:06:48.346 Test: blockdev nvme admin passthru ...[2024-12-06 20:34:05.404970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:48.346 passed 00:06:48.346 Test: blockdev copy ...passed 00:06:48.346 Suite: bdevio tests on: Nvme2n3 00:06:48.346 Test: blockdev write read block ...passed 00:06:48.346 Test: blockdev write zeroes read block ...passed 00:06:48.346 Test: blockdev write zeroes read no split ...passed 00:06:48.346 Test: blockdev write zeroes read split ...passed 00:06:48.346 Test: blockdev write zeroes read split partial ...passed 00:06:48.346 Test: blockdev reset ...[2024-12-06 20:34:05.446389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:48.346 [2024-12-06 20:34:05.449529] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:48.346 passed 00:06:48.346 Test: blockdev write read 8 blocks ...passed 00:06:48.346 Test: blockdev write read size > 128k ...passed 00:06:48.346 Test: blockdev write read invalid size ...passed 00:06:48.346 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.346 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.346 Test: blockdev write read max offset ...passed 00:06:48.346 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.346 Test: blockdev writev readv 8 blocks ...passed 00:06:48.346 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.346 Test: blockdev writev readv block ...passed 00:06:48.346 Test: blockdev writev readv size > 128k ...passed 00:06:48.346 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.347 Test: blockdev comparev and writev ...[2024-12-06 20:34:05.455698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf802000 len:0x1000 00:06:48.347 [2024-12-06 20:34:05.455742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.347 passed 00:06:48.347 Test: blockdev nvme passthru rw ...passed 00:06:48.347 Test: blockdev nvme passthru vendor specific ...passed 00:06:48.347 Test: blockdev nvme admin passthru ...[2024-12-06 20:34:05.456297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:48.347 [2024-12-06 20:34:05.456324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:48.347 passed 00:06:48.347 Test: blockdev copy ...passed 00:06:48.347 Suite: bdevio tests on: Nvme2n2 00:06:48.347 Test: blockdev write read block ...passed 00:06:48.347 Test: blockdev write zeroes read block ...passed 00:06:48.347 Test: blockdev write zeroes read no split ...passed 00:06:48.605 Test: blockdev write zeroes read split ...passed 00:06:48.605 Test: blockdev write zeroes read split partial ...passed 00:06:48.605 Test: blockdev reset ...[2024-12-06 20:34:05.497652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:48.605 [2024-12-06 20:34:05.501999] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:48.605 passed 00:06:48.605 Test: blockdev write read 8 blocks ...passed 00:06:48.605 Test: blockdev write read size > 128k ...passed 00:06:48.605 Test: blockdev write read invalid size ...passed 00:06:48.605 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.605 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.605 Test: blockdev write read max offset ...passed 00:06:48.605 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.605 Test: blockdev writev readv 8 blocks ...passed 00:06:48.605 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.605 Test: blockdev writev readv block ...passed 00:06:48.605 Test: blockdev writev readv size > 128k ...passed 00:06:48.605 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.605 Test: blockdev comparev and writev ...[2024-12-06 20:34:05.507780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba438000 len:0x1000 00:06:48.605 [2024-12-06 20:34:05.507821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.605 passed 00:06:48.605 Test: blockdev nvme passthru rw ...passed 00:06:48.605 Test: blockdev nvme passthru vendor specific ...[2024-12-06 20:34:05.508416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:48.605 passed 00:06:48.605 Test: blockdev nvme admin passthru ...[2024-12-06 20:34:05.508443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:48.605 passed 00:06:48.605 Test: blockdev copy ...passed 00:06:48.605 Suite: bdevio tests on: Nvme2n1 00:06:48.605 Test: blockdev write read block ...passed 00:06:48.605 Test: blockdev write zeroes read block ...passed 00:06:48.605 Test: blockdev write zeroes read no split ...passed 00:06:48.605 Test: blockdev write zeroes read split ...passed 00:06:48.605 Test: blockdev write zeroes read split partial ...passed 00:06:48.605 Test: blockdev reset ...[2024-12-06 20:34:05.561989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:48.605 [2024-12-06 20:34:05.565078] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:48.605 passed 00:06:48.605 Test: blockdev write read 8 blocks ...passed 00:06:48.605 Test: blockdev write read size > 128k ...passed 00:06:48.605 Test: blockdev write read invalid size ...passed 00:06:48.605 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.606 Test: blockdev write read max offset ...passed 00:06:48.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.606 Test: blockdev writev readv 8 blocks ...passed 00:06:48.606 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.606 Test: blockdev writev readv block ...passed 00:06:48.606 Test: blockdev writev readv size > 128k ...passed 00:06:48.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.606 Test: blockdev comparev and writev ...[2024-12-06 20:34:05.571688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba434000 len:0x1000 00:06:48.606 [2024-12-06 20:34:05.571730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.606 passed 00:06:48.606 Test: blockdev nvme passthru rw ...passed 00:06:48.606 Test: blockdev nvme passthru vendor specific ...[2024-12-06 20:34:05.572422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:48.606 [2024-12-06 20:34:05.572447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:48.606 passed 00:06:48.606 Test: blockdev nvme admin passthru ...passed 00:06:48.606 Test: blockdev copy ...passed 00:06:48.606 Suite: bdevio tests on: Nvme1n1p2 00:06:48.606 Test: blockdev write read block ...passed 00:06:48.606 Test: blockdev write zeroes read block ...passed 00:06:48.606 Test: blockdev write zeroes read no split ...passed 00:06:48.606 Test: blockdev write zeroes read split ...passed 00:06:48.606 Test: blockdev write zeroes read split partial ...passed 00:06:48.606 Test: blockdev reset ...[2024-12-06 20:34:05.626290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:48.606 [2024-12-06 20:34:05.629064] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:48.606 passed 00:06:48.606 Test: blockdev write read 8 blocks ...passed 00:06:48.606 Test: blockdev write read size > 128k ...passed 00:06:48.606 Test: blockdev write read invalid size ...passed 00:06:48.606 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.606 Test: blockdev write read max offset ...passed 00:06:48.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.606 Test: blockdev writev readv 8 blocks ...passed 00:06:48.606 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.606 Test: blockdev writev readv block ...passed 00:06:48.606 Test: blockdev writev readv size > 128k ...passed 00:06:48.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.606 Test: blockdev comparev and writev ...[2024-12-06 20:34:05.635373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ba430000 len:0x1000 00:06:48.606 [2024-12-06 20:34:05.635414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.606 passed 00:06:48.606 Test: blockdev nvme passthru rw ...passed 00:06:48.606 Test: blockdev nvme passthru vendor specific ...passed 00:06:48.606 Test: blockdev nvme admin passthru ...passed 00:06:48.606 Test: blockdev copy ...passed 00:06:48.606 Suite: bdevio tests on: Nvme1n1p1 00:06:48.606 Test: blockdev write read block ...passed 00:06:48.606 Test: blockdev write zeroes read block ...passed 00:06:48.606 Test: blockdev write zeroes read no split ...passed 00:06:48.606 Test: blockdev write zeroes read split ...passed 00:06:48.606 Test: blockdev write zeroes read split partial ...passed 00:06:48.606 Test: blockdev reset ...[2024-12-06 20:34:05.674782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:48.606 [2024-12-06 20:34:05.677430] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:48.606 passed 00:06:48.606 Test: blockdev write read 8 blocks ...passed 00:06:48.606 Test: blockdev write read size > 128k ...passed 00:06:48.606 Test: blockdev write read invalid size ...passed 00:06:48.606 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.606 Test: blockdev write read max offset ...passed 00:06:48.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.606 Test: blockdev writev readv 8 blocks ...passed 00:06:48.606 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.606 Test: blockdev writev readv block ...passed 00:06:48.606 Test: blockdev writev readv size > 128k ...passed 00:06:48.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.606 Test: blockdev comparev and writev ...[2024-12-06 20:34:05.683258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c020e000 len:0x1000 00:06:48.606 [2024-12-06 20:34:05.683298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.606 passed 00:06:48.606 Test: blockdev nvme passthru rw ...passed 00:06:48.606 Test: blockdev nvme passthru vendor specific ...passed 00:06:48.606 Test: blockdev nvme admin passthru ...passed 00:06:48.606 Test: blockdev copy ...passed 00:06:48.606 Suite: bdevio tests on: Nvme0n1 00:06:48.606 Test: blockdev write read block ...passed 00:06:48.606 Test: blockdev write zeroes read block ...passed 00:06:48.606 Test: blockdev write zeroes read no split ...passed 00:06:48.606 Test: blockdev write zeroes read split ...passed 00:06:48.606 Test: blockdev write zeroes read split partial ...passed 00:06:48.606 Test: blockdev reset ...[2024-12-06 20:34:05.724098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:48.606 [2024-12-06 20:34:05.726688] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:48.606 passed 00:06:48.606 Test: blockdev write read 8 blocks ...passed 00:06:48.606 Test: blockdev write read size > 128k ...passed 00:06:48.606 Test: blockdev write read invalid size ...passed 00:06:48.606 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.606 Test: blockdev write read max offset ...passed 00:06:48.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.606 Test: blockdev writev readv 8 blocks ...passed 00:06:48.606 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.606 Test: blockdev writev readv block ...passed 00:06:48.606 Test: blockdev writev readv size > 128k ...passed 00:06:48.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.606 Test: blockdev comparev and writev ...[2024-12-06 20:34:05.734963] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:48.606 separate metadata which is not supported yet. 
00:06:48.606 passed 00:06:48.606 Test: blockdev nvme passthru rw ...passed 00:06:48.606 Test: blockdev nvme passthru vendor specific ...[2024-12-06 20:34:05.735635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:48.606 [2024-12-06 20:34:05.735674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:48.606 passed 00:06:48.865 Test: blockdev nvme admin passthru ...passed 00:06:48.865 Test: blockdev copy ...passed 00:06:48.865 00:06:48.865 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.865 suites 7 7 n/a 0 0 00:06:48.865 tests 161 161 161 0 0 00:06:48.865 asserts 1025 1025 1025 0 n/a 00:06:48.865 00:06:48.865 Elapsed time = 1.035 seconds 00:06:48.865 0 00:06:48.865 20:34:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61373 00:06:48.865 20:34:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61373 ']' 00:06:48.865 20:34:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61373 00:06:48.865 20:34:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:48.865 20:34:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.865 20:34:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61373 00:06:48.865 20:34:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.865 20:34:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.865 killing process with pid 61373 00:06:48.865 20:34:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61373' 00:06:48.865 20:34:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61373 00:06:48.865 20:34:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61373 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:49.468 00:06:49.468 real 0m2.114s 00:06:49.468 user 0m5.376s 00:06:49.468 sys 0m0.286s 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:49.468 ************************************ 00:06:49.468 END TEST bdev_bounds 00:06:49.468 ************************************ 00:06:49.468 20:34:06 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:49.468 20:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:49.468 20:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.468 20:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.468 ************************************ 00:06:49.468 START TEST bdev_nbd 00:06:49.468 ************************************ 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:49.468 20:34:06 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:49.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61427 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61427 /var/tmp/spdk-nbd.sock 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61427 ']' 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:49.468 20:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:49.468 [2024-12-06 20:34:06.576624] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:06:49.468 [2024-12-06 20:34:06.576743] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.727 [2024-12-06 20:34:06.742366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.727 [2024-12-06 20:34:06.848549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.359 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.359 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:50.359 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:50.359 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.359 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.359 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:50.359 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:50.359 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.360 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.360 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:50.360 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:50.360 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:50.360 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:50.360 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:50.360 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.617 1+0 records in 00:06:50.617 1+0 records out 00:06:50.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445312 s, 9.2 MB/s 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:50.617 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.877 1+0 records in 00:06:50.877 1+0 records out 00:06:50.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524215 s, 7.8 MB/s 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:50.877 20:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.137 1+0 records in 00:06:51.137 1+0 records out 00:06:51.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443643 s, 9.2 MB/s 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.137 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.396 1+0 records in 00:06:51.396 1+0 records out 00:06:51.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047628 s, 8.6 MB/s 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.396 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.655 1+0 records in 00:06:51.655 1+0 records out 00:06:51.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571383 s, 7.2 MB/s 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.655 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
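The trace above repeats one attach-and-probe pattern per bdev: nbd_common.sh asks the nbd_start_disk RPC on /var/tmp/spdk-nbd.sock for a device node, waits for that node to appear in /proc/partitions, then reads a single 4096-byte block with dd to prove the device answers I/O. A minimal standalone sketch of the same pattern in bash, assuming bdev_svc is already listening on that socket and Nvme2n3 is defined in the loaded bdev.json; the 0.1 s retry interval is illustrative, not taken from the script:

    # Attach the bdev to a free NBD node; rpc.py prints the assigned /dev/nbdX path.
    dev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3)
    # Poll /proc/partitions until the kernel registers the node (up to 20 tries).
    for i in $(seq 1 20); do
        grep -q -w "$(basename "$dev")" /proc/partitions && break
        sleep 0.1    # illustrative back-off; the traced helper only bounds the retry count
    done
    # One direct-I/O read of a single block; iflag=direct bypasses the page cache,
    # so the read must be served by the NBD module and the SPDK bdev behind it.
    dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct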
00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.914 1+0 records in 00:06:51.914 1+0 records out 00:06:51.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389173 s, 10.5 MB/s 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.914 20:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:52.171 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.172 1+0 records in 00:06:52.172 1+0 records out 00:06:52.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448933 s, 9.1 MB/s 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd0", 00:06:52.172 "bdev_name": "Nvme0n1" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd1", 00:06:52.172 "bdev_name": "Nvme1n1p1" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd2", 00:06:52.172 "bdev_name": "Nvme1n1p2" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd3", 00:06:52.172 "bdev_name": "Nvme2n1" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd4", 00:06:52.172 "bdev_name": "Nvme2n2" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd5", 00:06:52.172 "bdev_name": "Nvme2n3" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd6", 00:06:52.172 "bdev_name": "Nvme3n1" 00:06:52.172 } 00:06:52.172 ]' 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd0", 00:06:52.172 "bdev_name": "Nvme0n1" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd1", 00:06:52.172 "bdev_name": "Nvme1n1p1" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd2", 00:06:52.172 "bdev_name": "Nvme1n1p2" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd3", 00:06:52.172 "bdev_name": "Nvme2n1" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd4", 00:06:52.172 "bdev_name": "Nvme2n2" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd5", 00:06:52.172 "bdev_name": "Nvme2n3" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "nbd_device": "/dev/nbd6", 00:06:52.172 "bdev_name": "Nvme3n1" 00:06:52.172 } 00:06:52.172 ]' 00:06:52.172 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.430 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.688 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.688 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.688 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.688 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.688 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.688 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.688 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.688 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.688 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.688 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:52.945 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:52.945 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:52.945 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:52.945 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.945 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.945 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:52.945 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.945 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.945 20:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.945 20:34:09 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:53.202 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:53.202 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:53.202 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:53.202 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.202 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.202 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:53.202 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.202 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.202 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.202 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:53.460 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:53.460 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:53.460 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:53.460 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.460 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.460 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:53.460 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.460 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.460 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.460 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:53.718 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:53.718 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:53.718 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:53.718 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.718 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.718 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:53.718 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.718 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.718 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.718 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:53.718 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:53.976 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:53.976 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:06:53.976 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.976 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.976 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:53.976 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.976 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.976 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.976 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.976 20:34:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.976 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.976 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.976 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.976 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.235 20:34:11 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:54.235 /dev/nbd0 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.235 1+0 records in 00:06:54.235 1+0 records out 00:06:54.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384884 s, 10.6 MB/s 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.235 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:54.493 /dev/nbd1 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.493 20:34:11 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.493 1+0 records in 00:06:54.493 1+0 records out 00:06:54.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424824 s, 9.6 MB/s 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.493 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:54.752 /dev/nbd10 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.752 1+0 records in 00:06:54.752 1+0 records out 00:06:54.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418215 s, 9.8 MB/s 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.752 20:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:55.010 /dev/nbd11 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.010 1+0 records in 00:06:55.010 1+0 records out 00:06:55.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546512 s, 7.5 MB/s 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.010 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:55.268 /dev/nbd12 00:06:55.268 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:55.268 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:55.268 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:55.268 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.268 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.268 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.268 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:06:55.268 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.268 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.268 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.268 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.268 1+0 records in 00:06:55.268 1+0 records out 00:06:55.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487367 s, 8.4 MB/s 00:06:55.269 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.269 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.269 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.269 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.269 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.269 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.269 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.269 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:55.526 /dev/nbd13 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.526 1+0 records in 00:06:55.526 1+0 records out 00:06:55.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439496 s, 9.3 MB/s 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.526 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:55.785 /dev/nbd14 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.785 1+0 records in 00:06:55.785 1+0 records out 00:06:55.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409082 s, 10.0 MB/s 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.785 20:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd0", 00:06:56.043 "bdev_name": "Nvme0n1" 00:06:56.043 }, 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd1", 00:06:56.043 "bdev_name": "Nvme1n1p1" 00:06:56.043 }, 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd10", 00:06:56.043 "bdev_name": "Nvme1n1p2" 00:06:56.043 }, 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd11", 00:06:56.043 "bdev_name": "Nvme2n1" 00:06:56.043 }, 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd12", 00:06:56.043 "bdev_name": "Nvme2n2" 00:06:56.043 }, 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd13", 00:06:56.043 "bdev_name": 
"Nvme2n3" 00:06:56.043 }, 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd14", 00:06:56.043 "bdev_name": "Nvme3n1" 00:06:56.043 } 00:06:56.043 ]' 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd0", 00:06:56.043 "bdev_name": "Nvme0n1" 00:06:56.043 }, 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd1", 00:06:56.043 "bdev_name": "Nvme1n1p1" 00:06:56.043 }, 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd10", 00:06:56.043 "bdev_name": "Nvme1n1p2" 00:06:56.043 }, 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd11", 00:06:56.043 "bdev_name": "Nvme2n1" 00:06:56.043 }, 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd12", 00:06:56.043 "bdev_name": "Nvme2n2" 00:06:56.043 }, 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd13", 00:06:56.043 "bdev_name": "Nvme2n3" 00:06:56.043 }, 00:06:56.043 { 00:06:56.043 "nbd_device": "/dev/nbd14", 00:06:56.043 "bdev_name": "Nvme3n1" 00:06:56.043 } 00:06:56.043 ]' 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.043 /dev/nbd1 00:06:56.043 /dev/nbd10 00:06:56.043 /dev/nbd11 00:06:56.043 /dev/nbd12 00:06:56.043 /dev/nbd13 00:06:56.043 /dev/nbd14' 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.043 /dev/nbd1 00:06:56.043 /dev/nbd10 00:06:56.043 /dev/nbd11 00:06:56.043 /dev/nbd12 00:06:56.043 /dev/nbd13 00:06:56.043 /dev/nbd14' 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:56.043 256+0 records in 00:06:56.043 256+0 records out 00:06:56.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00591521 s, 177 MB/s 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.043 256+0 records in 00:06:56.043 256+0 records out 00:06:56.043 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0722798 s, 14.5 MB/s 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.043 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.302 256+0 records in 00:06:56.302 256+0 records out 00:06:56.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0767756 s, 13.7 MB/s 00:06:56.302 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.302 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:56.302 256+0 records in 00:06:56.302 256+0 records out 00:06:56.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0781889 s, 13.4 MB/s 00:06:56.302 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.302 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:56.302 256+0 records in 00:06:56.302 256+0 records out 00:06:56.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0797047 s, 13.2 MB/s 00:06:56.302 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.302 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:56.561 256+0 records in 00:06:56.561 256+0 records out 00:06:56.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0745738 s, 14.1 MB/s 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:56.561 256+0 records in 00:06:56.561 256+0 records out 00:06:56.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0762291 s, 13.8 MB/s 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:56.561 256+0 records in 00:06:56.561 256+0 records out 00:06:56.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.075764 s, 13.8 MB/s 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # 
for i in "${nbd_list[@]}" 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.561 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.819 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.819 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.819 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.819 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.819 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.819 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.819 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.819 20:34:13 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:56.819 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.819 20:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.076 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.076 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.076 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.076 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.076 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.076 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.076 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.076 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.076 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.076 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.346 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:57.605 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:57.605 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:57.605 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:57.605 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.605 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.605 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:57.605 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.605 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.605 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.605 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:57.863 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:57.863 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:57.863 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:57.863 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.863 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.863 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:57.863 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.863 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.863 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.863 20:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:58.120 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:58.120 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:58.120 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:58.120 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.120 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.120 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:58.120 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.120 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.120 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.120 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.120 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.376 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.376 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.376 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.376 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:58.376 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.376 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.377 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:58.377 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.377 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.377 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:58.377 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:58.377 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:58.377 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:58.377 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.377 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:58.377 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:58.636 malloc_lvol_verify 00:06:58.636 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:58.894 3ad81b41-f1b9-46c1-8934-befa3447c97c 00:06:58.894 20:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:58.894 893f5626-9323-4772-bec7-41b56f6ae421 00:06:58.894 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:59.152 /dev/nbd0 00:06:59.152 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:59.152 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:59.152 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:59.152 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:59.152 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:59.152 mke2fs 1.47.0 (5-Feb-2023) 00:06:59.152 Discarding device blocks: 0/4096 done 00:06:59.152 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:59.152 00:06:59.152 Allocating group tables: 0/1 done 00:06:59.152 Writing inode tables: 0/1 done 00:06:59.152 Creating journal (1024 blocks): done 00:06:59.152 Writing superblocks and filesystem accounting information: 0/1 done 00:06:59.152 00:06:59.152 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:59.152 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.152 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:59.152 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.152 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:59.152 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:59.152 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61427 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61427 ']' 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61427 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61427 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.410 killing process with pid 61427 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61427' 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61427 00:06:59.410 20:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61427 00:06:59.975 20:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:59.975 00:06:59.975 real 0m10.580s 00:06:59.975 user 0m15.218s 00:06:59.975 sys 0m3.495s 00:06:59.975 20:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.975 20:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:59.975 ************************************ 00:06:59.975 END TEST bdev_nbd 00:06:59.975 ************************************ 00:07:00.238 20:34:17 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:00.238 20:34:17 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:00.238 20:34:17 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:00.238 skipping fio tests on NVMe due to multi-ns failures. 00:07:00.238 20:34:17 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:00.238 20:34:17 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:00.238 20:34:17 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:00.238 20:34:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:00.238 20:34:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.238 20:34:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:00.238 ************************************ 00:07:00.238 START TEST bdev_verify 00:07:00.238 ************************************ 00:07:00.238 20:34:17 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:00.238 [2024-12-06 20:34:17.188944] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:07:00.238 [2024-12-06 20:34:17.189069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61840 ] 00:07:00.238 [2024-12-06 20:34:17.343521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.497 [2024-12-06 20:34:17.429756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.497 [2024-12-06 20:34:17.429777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.063 Running I/O for 5 seconds... 
00:07:03.418 22208.00 IOPS, 86.75 MiB/s
[2024-12-06T20:34:21.480Z] 21664.00 IOPS, 84.62 MiB/s
[2024-12-06T20:34:22.409Z] 22165.33 IOPS, 86.58 MiB/s
[2024-12-06T20:34:23.354Z] 21936.00 IOPS, 85.69 MiB/s
[2024-12-06T20:34:23.354Z] 22169.60 IOPS, 86.60 MiB/s
00:07:06.221 Latency(us)
00:07:06.221 [2024-12-06T20:34:23.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:06.221 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x0 length 0xbd0bd
00:07:06.221 Nvme0n1 : 5.07 1527.62 5.97 0.00 0.00 83423.97 8166.79 91548.75
00:07:06.221 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:06.221 Nvme0n1 : 5.07 1578.65 6.17 0.00 0.00 80628.19 8872.57 85095.98
00:07:06.221 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x0 length 0x4ff80
00:07:06.221 Nvme1n1p1 : 5.07 1527.06 5.97 0.00 0.00 83286.56 8418.86 90338.86
00:07:06.221 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:06.221 Nvme1n1p1 : 5.08 1586.85 6.20 0.00 0.00 80268.42 11090.71 78643.20
00:07:06.221 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x0 length 0x4ff7f
00:07:06.221 Nvme1n1p2 : 5.07 1526.58 5.96 0.00 0.00 83180.02 8217.21 90338.86
00:07:06.221 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:06.221 Nvme1n1p2 : 5.08 1586.34 6.20 0.00 0.00 80138.34 11292.36 77836.60
00:07:06.221 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x0 length 0x80000
00:07:06.221 Nvme2n1 : 5.08 1535.77 6.00 0.00 0.00 82768.27 8267.62 88322.36
00:07:06.221 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x80000 length 0x80000
00:07:06.221 Nvme2n1 : 5.09 1585.79 6.19 0.00 0.00 79995.03 12250.19 79046.50
00:07:06.221 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x0 length 0x80000
00:07:06.221 Nvme2n2 : 5.09 1535.10 6.00 0.00 0.00 82622.05 9175.04 87515.77
00:07:06.221 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x80000 length 0x80000
00:07:06.221 Nvme2n2 : 5.09 1585.10 6.19 0.00 0.00 79849.03 13409.67 80256.39
00:07:06.221 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x0 length 0x80000
00:07:06.221 Nvme2n3 : 5.09 1534.39 5.99 0.00 0.00 82491.72 10384.94 87919.06
00:07:06.221 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x80000 length 0x80000
00:07:06.221 Nvme2n3 : 5.09 1584.43 6.19 0.00 0.00 79678.76 13208.02 81869.59
00:07:06.221 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x0 length 0x20000
00:07:06.221 Nvme3n1 : 5.09 1533.70 5.99 0.00 0.00 82337.67 11494.01 91548.75
00:07:06.221 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.221 Verification LBA range: start 0x20000 length 0x20000
00:07:06.221 Nvme3n1 :
5.09 1583.75 6.19 0.00 0.00 79579.32 9729.58 83079.48 00:07:06.221 [2024-12-06T20:34:23.354Z] =================================================================================================================== 00:07:06.221 [2024-12-06T20:34:23.354Z] Total : 21811.14 85.20 0.00 0.00 81420.44 8166.79 91548.75 00:07:07.595 00:07:07.595 real 0m7.309s 00:07:07.595 user 0m13.709s 00:07:07.595 sys 0m0.230s 00:07:07.595 20:34:24 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.595 ************************************ 00:07:07.595 END TEST bdev_verify 00:07:07.595 ************************************ 00:07:07.595 20:34:24 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:07.595 20:34:24 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:07.595 20:34:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:07.595 20:34:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.595 20:34:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:07.595 ************************************ 00:07:07.595 START TEST bdev_verify_big_io 00:07:07.595 ************************************ 00:07:07.595 20:34:24 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:07.595 [2024-12-06 20:34:24.533157] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:07:07.595 [2024-12-06 20:34:24.533285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61938 ] 00:07:07.595 [2024-12-06 20:34:24.695595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.854 [2024-12-06 20:34:24.798920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.854 [2024-12-06 20:34:24.798935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.436 Running I/O for 5 seconds... 
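A quick consistency check on the verify table above: the MiB/s column should equal IOPS times the 4096-byte I/O size. Taking the first Nvme0n1 row:

awk 'BEGIN { printf "%.2f MiB/s\n", 1527.62 * 4096 / (1024 * 1024) }'
# prints 5.97 MiB/s, matching the 5.97 reported for Nvme0n1

The same relation holds for the 65536-byte rows in the big-I/O table that follows, e.g. 128.47 IOPS times 64 KiB gives the reported 8.03 MiB/s.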
00:07:12.931 737.00 IOPS, 46.06 MiB/s
[2024-12-06T20:34:32.004Z] 1880.00 IOPS, 117.50 MiB/s
[2024-12-06T20:34:32.004Z] 3015.67 IOPS, 188.48 MiB/s
00:07:14.871 Latency(us)
00:07:14.871 [2024-12-06T20:34:32.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:14.871 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.871 Verification LBA range: start 0x0 length 0xbd0b
00:07:14.871 Nvme0n1 : 5.73 128.47 8.03 0.00 0.00 949748.29 15728.64 1142141.24
00:07:14.871 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.871 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:14.871 Nvme0n1 : 5.80 99.34 6.21 0.00 0.00 1230425.40 11594.83 1342177.28
00:07:14.871 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.871 Verification LBA range: start 0x0 length 0x4ff8
00:07:14.871 Nvme1n1p1 : 5.73 128.97 8.06 0.00 0.00 918152.33 95985.03 980821.86
00:07:14.872 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.872 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:14.872 Nvme1n1p1 : 6.02 101.33 6.33 0.00 0.00 1155340.34 108083.99 1135688.47
00:07:14.872 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.872 Verification LBA range: start 0x0 length 0x4ff7
00:07:14.872 Nvme1n1p2 : 5.73 133.95 8.37 0.00 0.00 874061.19 107277.39 819502.47
00:07:14.872 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.872 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:14.872 Nvme1n1p2 : 5.92 99.68 6.23 0.00 0.00 1144380.57 113730.17 1361535.61
00:07:14.872 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.872 Verification LBA range: start 0x0 length 0x8000
00:07:14.872 Nvme2n1 : 5.83 135.51 8.47 0.00 0.00 836242.93 94775.14 980821.86
00:07:14.872 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.872 Verification LBA range: start 0x8000 length 0x8000
00:07:14.872 Nvme2n1 : 6.02 98.14 6.13 0.00 0.00 1127824.49 98001.53 2013265.92
00:07:14.872 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.872 Verification LBA range: start 0x0 length 0x8000
00:07:14.872 Nvme2n2 : 5.95 146.06 9.13 0.00 0.00 760345.11 32868.82 1000180.18
00:07:14.872 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.872 Verification LBA range: start 0x8000 length 0x8000
00:07:14.872 Nvme2n2 : 6.07 107.72 6.73 0.00 0.00 1001507.63 24500.38 2051982.57
00:07:14.872 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.872 Verification LBA range: start 0x0 length 0x8000
00:07:14.872 Nvme2n3 : 5.95 150.55 9.41 0.00 0.00 718781.83 31457.28 1013085.74
00:07:14.872 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.872 Verification LBA range: start 0x8000 length 0x8000
00:07:14.872 Nvme2n3 : 6.10 117.63 7.35 0.00 0.00 887204.59 12603.08 1819682.66
00:07:14.872 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.872 Verification LBA range: start 0x0 length 0x2000
00:07:14.872 Nvme3n1 : 6.03 169.81 10.61 0.00 0.00 620652.17 2155.13 1025991.29
00:07:14.872 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.872 Verification LBA range: start 0x2000 length 0x2000
00:07:14.872 Nvme3n1 : 6.17 157.43 9.84 0.00 0.00 648069.68 387.54 1858399.31
[2024-12-06T20:34:32.005Z] =================================================================================================================== 00:07:14.872 [2024-12-06T20:34:32.005Z] Total : 1774.59 110.91 0.00 0.00 885492.31 387.54 2051982.57 00:07:16.796 00:07:16.796 real 0m9.313s 00:07:16.796 user 0m17.719s 00:07:16.796 sys 0m0.244s 00:07:16.796 20:34:33 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.796 ************************************ 00:07:16.796 END TEST bdev_verify_big_io 00:07:16.796 ************************************ 00:07:16.796 20:34:33 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:16.796 20:34:33 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:16.796 20:34:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:16.796 20:34:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.796 20:34:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.796 ************************************ 00:07:16.796 START TEST bdev_write_zeroes 00:07:16.796 ************************************ 00:07:16.796 20:34:33 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:16.796 [2024-12-06 20:34:33.891263] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:07:16.796 [2024-12-06 20:34:33.891391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62047 ] 00:07:17.054 [2024-12-06 20:34:34.046631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.054 [2024-12-06 20:34:34.129433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.621 Running I/O for 1 seconds... 
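Unlike verify, the -w write_zeroes workload exercises the bdev write-zeroes path rather than writing data buffers (the GPT bdevs dumped further down list "write_zeroes": true among their supported I/O types). A hedged spot check of the effect, assuming the target were exported over nbd as in the earlier stage rather than driven directly as here:

# compare the first 1 MiB of the device against an all-zero stream;
# cmp -n accepts the same size suffix used elsewhere in this trace
cmp -n 1M /dev/nbd0 /dev/zero && echo 'first 1 MiB reads back as zeros'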
00:07:18.816 71232.00 IOPS, 278.25 MiB/s
00:07:18.816 Latency(us)
[2024-12-06T20:34:35.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:18.816 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.816 Nvme0n1 : 1.02 10158.40 39.68 0.00 0.00 12575.26 9225.45 22887.19
00:07:18.816 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.816 Nvme1n1p1 : 1.02 10148.19 39.64 0.00 0.00 12572.44 9376.69 22383.06
00:07:18.816 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.816 Nvme1n1p2 : 1.02 10137.50 39.60 0.00 0.00 12554.90 9326.28 21576.47
00:07:18.816 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.816 Nvme2n1 : 1.02 10127.53 39.56 0.00 0.00 12539.85 9275.86 20971.52
00:07:18.816 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.816 Nvme2n2 : 1.02 10118.58 39.53 0.00 0.00 12524.97 9376.69 20467.40
00:07:18.816 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.816 Nvme2n3 : 1.03 10109.18 39.49 0.00 0.00 12514.21 8872.57 21878.94
00:07:18.816 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.816 Nvme3n1 : 1.03 10100.22 39.45 0.00 0.00 12501.33 8015.56 22988.01
[2024-12-06T20:34:35.949Z] ===================================================================================================================
[2024-12-06T20:34:35.949Z] Total : 70899.61 276.95 0.00 0.00 12540.42 8015.56 22988.01
00:07:19.385
00:07:19.385 real 0m2.484s user 0m2.203s sys 0m0.168s 20:34:36 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:19.385 ************************************
00:07:19.385 END TEST bdev_write_zeroes ************************************ 20:34:36 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:19.385 20:34:36 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 20:34:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 20:34:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 20:34:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:19.385 ************************************
00:07:19.385 START TEST bdev_json_nonenclosed
00:07:19.385 ************************************
00:07:19.385 20:34:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:19.385 [2024-12-06 20:34:36.416220] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization...
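bdev_json_nonenclosed is a negative test: the config handed to bdevperf deliberately has a top level that is not wrapped in {}, and the stage passes when the app rejects it with the "not enclosed in {}" error printed below. A guess at the shape of such a config (the repo's actual nonenclosed.json may differ):

cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
# expected: json_config reports the configuration is not enclosed in {}
# and the app stops instead of running I/O

The bdev_json_nonarray stage that follows applies the same pattern to a config whose "subsystems" value is not an array.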
00:07:19.385 [2024-12-06 20:34:36.416347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62099 ] 00:07:19.644 [2024-12-06 20:34:36.573174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.644 [2024-12-06 20:34:36.678266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.644 [2024-12-06 20:34:36.678351] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:19.644 [2024-12-06 20:34:36.678368] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:19.644 [2024-12-06 20:34:36.678377] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.904 00:07:19.904 real 0m0.504s 00:07:19.904 user 0m0.310s 00:07:19.904 sys 0m0.090s 00:07:19.904 20:34:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.904 ************************************ 00:07:19.904 END TEST bdev_json_nonenclosed 00:07:19.904 20:34:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:19.904 ************************************ 00:07:19.904 20:34:36 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:19.904 20:34:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:19.904 20:34:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.904 20:34:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:19.904 ************************************ 00:07:19.904 START TEST bdev_json_nonarray 00:07:19.904 ************************************ 00:07:19.904 20:34:36 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:19.904 [2024-12-06 20:34:36.955500] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:07:19.904 [2024-12-06 20:34:36.955623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62120 ] 00:07:20.163 [2024-12-06 20:34:37.115367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.163 [2024-12-06 20:34:37.215779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.163 [2024-12-06 20:34:37.215865] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:20.163 [2024-12-06 20:34:37.215881] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:20.163 [2024-12-06 20:34:37.215901] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.421 ************************************ 00:07:20.421 END TEST bdev_json_nonarray 00:07:20.421 ************************************ 00:07:20.421 00:07:20.421 real 0m0.506s 00:07:20.421 user 0m0.308s 00:07:20.421 sys 0m0.092s 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:20.421 20:34:37 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:20.421 20:34:37 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:20.421 20:34:37 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:20.421 20:34:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.421 20:34:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.421 20:34:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.421 ************************************ 00:07:20.421 START TEST bdev_gpt_uuid 00:07:20.421 ************************************ 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:20.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62151 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62151 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62151 ']' 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:20.421 20:34:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:20.421 [2024-12-06 20:34:37.513586] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
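The gpt_uuid assertions below pull individual fields out of bdev_get_bdevs JSON with jq. A standalone sketch of the same three checks, using the rpc.py path and the first partition UUID from the trace; it assumes the spdk_tgt started above is still listening on its default socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=6f89f330-603b-4116-ac73-2ca8eae53030

bdev_json=$("$rpc" bdev_get_bdevs -b "$uuid")

# exactly one bdev should match the partition UUID
[[ $(jq -r 'length' <<<"$bdev_json") -eq 1 ]]
# its first alias and its GPT unique partition GUID are both that UUID
[[ $(jq -r '.[0].aliases[0]' <<<"$bdev_json") == "$uuid" ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev_json") == "$uuid" ]]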
00:07:20.421 [2024-12-06 20:34:37.513720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62151 ] 00:07:20.679 [2024-12-06 20:34:37.678576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.978 [2024-12-06 20:34:37.822911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.544 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.544 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:21.544 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:21.544 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.544 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 Some configs were skipped because the RPC state that can call them passed over. 00:07:21.802 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.802 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:21.802 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.802 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.802 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:21.802 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.802 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.802 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:21.802 { 00:07:21.802 "name": "Nvme1n1p1", 00:07:21.802 "aliases": [ 00:07:21.802 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:21.802 ], 00:07:21.802 "product_name": "GPT Disk", 00:07:21.802 "block_size": 4096, 00:07:21.802 "num_blocks": 655104, 00:07:21.802 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:21.802 "assigned_rate_limits": { 00:07:21.802 "rw_ios_per_sec": 0, 00:07:21.802 "rw_mbytes_per_sec": 0, 00:07:21.803 "r_mbytes_per_sec": 0, 00:07:21.803 "w_mbytes_per_sec": 0 00:07:21.803 }, 00:07:21.803 "claimed": false, 00:07:21.803 "zoned": false, 00:07:21.803 "supported_io_types": { 00:07:21.803 "read": true, 00:07:21.803 "write": true, 00:07:21.803 "unmap": true, 00:07:21.803 "flush": true, 00:07:21.803 "reset": true, 00:07:21.803 "nvme_admin": false, 00:07:21.803 "nvme_io": false, 00:07:21.803 "nvme_io_md": false, 00:07:21.803 "write_zeroes": true, 00:07:21.803 "zcopy": false, 00:07:21.803 "get_zone_info": false, 00:07:21.803 "zone_management": false, 00:07:21.803 "zone_append": false, 00:07:21.803 "compare": true, 00:07:21.803 "compare_and_write": false, 00:07:21.803 "abort": true, 00:07:21.803 "seek_hole": false, 00:07:21.803 "seek_data": false, 00:07:21.803 "copy": true, 00:07:21.803 "nvme_iov_md": false 00:07:21.803 }, 00:07:21.803 "driver_specific": { 
00:07:21.803 "gpt": { 00:07:21.803 "base_bdev": "Nvme1n1", 00:07:21.803 "offset_blocks": 256, 00:07:21.803 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:21.803 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:21.803 "partition_name": "SPDK_TEST_first" 00:07:21.803 } 00:07:21.803 } 00:07:21.803 } 00:07:21.803 ]' 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:21.803 { 00:07:21.803 "name": "Nvme1n1p2", 00:07:21.803 "aliases": [ 00:07:21.803 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:21.803 ], 00:07:21.803 "product_name": "GPT Disk", 00:07:21.803 "block_size": 4096, 00:07:21.803 "num_blocks": 655103, 00:07:21.803 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:21.803 "assigned_rate_limits": { 00:07:21.803 "rw_ios_per_sec": 0, 00:07:21.803 "rw_mbytes_per_sec": 0, 00:07:21.803 "r_mbytes_per_sec": 0, 00:07:21.803 "w_mbytes_per_sec": 0 00:07:21.803 }, 00:07:21.803 "claimed": false, 00:07:21.803 "zoned": false, 00:07:21.803 "supported_io_types": { 00:07:21.803 "read": true, 00:07:21.803 "write": true, 00:07:21.803 "unmap": true, 00:07:21.803 "flush": true, 00:07:21.803 "reset": true, 00:07:21.803 "nvme_admin": false, 00:07:21.803 "nvme_io": false, 00:07:21.803 "nvme_io_md": false, 00:07:21.803 "write_zeroes": true, 00:07:21.803 "zcopy": false, 00:07:21.803 "get_zone_info": false, 00:07:21.803 "zone_management": false, 00:07:21.803 "zone_append": false, 00:07:21.803 "compare": true, 00:07:21.803 "compare_and_write": false, 00:07:21.803 "abort": true, 00:07:21.803 "seek_hole": false, 00:07:21.803 "seek_data": false, 00:07:21.803 "copy": true, 00:07:21.803 "nvme_iov_md": false 00:07:21.803 }, 00:07:21.803 "driver_specific": { 00:07:21.803 "gpt": { 00:07:21.803 "base_bdev": "Nvme1n1", 00:07:21.803 "offset_blocks": 655360, 00:07:21.803 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:21.803 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:21.803 "partition_name": "SPDK_TEST_second" 00:07:21.803 } 00:07:21.803 } 00:07:21.803 } 00:07:21.803 ]' 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:21.803 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:22.061 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:22.061 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:22.061 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:22.061 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62151 00:07:22.061 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62151 ']' 00:07:22.061 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62151 00:07:22.061 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:22.061 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.061 20:34:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62151 00:07:22.061 killing process with pid 62151 00:07:22.061 20:34:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.061 20:34:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.061 20:34:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62151' 00:07:22.061 20:34:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62151 00:07:22.061 20:34:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62151 00:07:23.432 ************************************ 00:07:23.432 END TEST bdev_gpt_uuid 00:07:23.432 ************************************ 00:07:23.432 00:07:23.432 real 0m3.074s 00:07:23.432 user 0m3.200s 00:07:23.432 sys 0m0.383s 00:07:23.432 20:34:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.432 20:34:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:23.432 20:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:23.433 20:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:23.433 20:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:23.433 20:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:23.433 20:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:23.433 20:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:23.433 20:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:23.433 20:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:23.433 20:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:23.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:23.945 Waiting for block devices as requested 00:07:23.945 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:24.202 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:24.202 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:24.202 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:29.460 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:29.460 20:34:46 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:29.460 20:34:46 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:29.460 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:29.460 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:29.460 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:29.460 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:29.460 20:34:46 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:29.460 00:07:29.460 real 0m55.731s 00:07:29.460 user 1m12.070s 00:07:29.460 sys 0m7.635s 00:07:29.460 20:34:46 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.460 20:34:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.460 ************************************ 00:07:29.460 END TEST blockdev_nvme_gpt 00:07:29.460 ************************************ 00:07:29.717 20:34:46 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:29.717 20:34:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.717 20:34:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.717 20:34:46 -- common/autotest_common.sh@10 -- # set +x 00:07:29.717 ************************************ 00:07:29.717 START TEST nvme 00:07:29.717 ************************************ 00:07:29.717 20:34:46 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:29.717 * Looking for test storage... 00:07:29.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:29.717 20:34:46 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.717 20:34:46 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.717 20:34:46 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:29.717 20:34:46 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:29.717 20:34:46 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.717 20:34:46 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.717 20:34:46 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.717 20:34:46 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.717 20:34:46 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.717 20:34:46 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.717 20:34:46 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.717 20:34:46 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.717 20:34:46 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.717 20:34:46 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.717 20:34:46 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.717 20:34:46 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:29.717 20:34:46 nvme -- scripts/common.sh@345 -- # : 1 00:07:29.717 20:34:46 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.717 20:34:46 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.717 20:34:46 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:29.717 20:34:46 nvme -- scripts/common.sh@353 -- # local d=1 00:07:29.717 20:34:46 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.717 20:34:46 nvme -- scripts/common.sh@355 -- # echo 1 00:07:29.717 20:34:46 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.717 20:34:46 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:29.717 20:34:46 nvme -- scripts/common.sh@353 -- # local d=2 00:07:29.717 20:34:46 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.717 20:34:46 nvme -- scripts/common.sh@355 -- # echo 2 00:07:29.717 20:34:46 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.717 20:34:46 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.717 20:34:46 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.717 20:34:46 nvme -- scripts/common.sh@368 -- # return 0 00:07:29.717 20:34:46 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.717 20:34:46 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:29.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.717 --rc genhtml_branch_coverage=1 00:07:29.717 --rc genhtml_function_coverage=1 00:07:29.717 --rc genhtml_legend=1 00:07:29.717 --rc geninfo_all_blocks=1 00:07:29.717 --rc geninfo_unexecuted_blocks=1 00:07:29.717 00:07:29.717 ' 00:07:29.717 20:34:46 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:29.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.717 --rc genhtml_branch_coverage=1 00:07:29.717 --rc genhtml_function_coverage=1 00:07:29.717 --rc genhtml_legend=1 00:07:29.717 --rc geninfo_all_blocks=1 00:07:29.717 --rc geninfo_unexecuted_blocks=1 00:07:29.717 00:07:29.717 ' 00:07:29.717 20:34:46 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:29.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.717 --rc genhtml_branch_coverage=1 00:07:29.717 --rc genhtml_function_coverage=1 00:07:29.717 --rc genhtml_legend=1 00:07:29.717 --rc geninfo_all_blocks=1 00:07:29.717 --rc geninfo_unexecuted_blocks=1 00:07:29.717 00:07:29.717 ' 00:07:29.717 20:34:46 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:29.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.717 --rc genhtml_branch_coverage=1 00:07:29.717 --rc genhtml_function_coverage=1 00:07:29.717 --rc genhtml_legend=1 00:07:29.717 --rc geninfo_all_blocks=1 00:07:29.717 --rc geninfo_unexecuted_blocks=1 00:07:29.717 00:07:29.717 ' 00:07:29.717 20:34:46 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:30.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:30.537 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.537 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.537 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.537 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.795 20:34:47 nvme -- nvme/nvme.sh@79 -- # uname 00:07:30.795 20:34:47 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:30.795 20:34:47 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:30.795 20:34:47 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:30.795 20:34:47 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:30.795 20:34:47 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:30.795 20:34:47 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:30.795 20:34:47 nvme -- common/autotest_common.sh@1075 -- # stubpid=62780 00:07:30.795 Waiting for stub to ready for secondary processes... 00:07:30.795 20:34:47 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:30.795 20:34:47 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:30.795 20:34:47 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62780 ]] 00:07:30.795 20:34:47 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:30.795 20:34:47 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:30.795 [2024-12-06 20:34:47.751553] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:07:30.795 [2024-12-06 20:34:47.751725] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:31.729 [2024-12-06 20:34:48.528494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.729 [2024-12-06 20:34:48.622492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.729 [2024-12-06 20:34:48.622873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.729 [2024-12-06 20:34:48.622914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.729 [2024-12-06 20:34:48.636674] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:31.729 [2024-12-06 20:34:48.636715] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:31.729 [2024-12-06 20:34:48.645348] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:31.729 [2024-12-06 20:34:48.645485] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:31.729 [2024-12-06 20:34:48.646913] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:31.729 [2024-12-06 20:34:48.647045] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:31.729 [2024-12-06 20:34:48.647087] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:31.729 [2024-12-06 20:34:48.648493] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:31.729 [2024-12-06 20:34:48.648614] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:31.729 [2024-12-06 20:34:48.648669] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:31.729 [2024-12-06 20:34:48.650604] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:31.729 [2024-12-06 20:34:48.650801] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:31.729 [2024-12-06 20:34:48.650855] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:31.729 [2024-12-06 20:34:48.650906] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:31.729 [2024-12-06 20:34:48.650946] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:31.729 20:34:48 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:31.729 done. 00:07:31.729 20:34:48 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:31.729 20:34:48 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:31.729 20:34:48 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:31.729 20:34:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.729 20:34:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.729 ************************************ 00:07:31.729 START TEST nvme_reset 00:07:31.729 ************************************ 00:07:31.729 20:34:48 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:31.987 Initializing NVMe Controllers 00:07:31.987 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:31.987 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:31.987 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:31.987 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:31.987 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:31.987 00:07:31.987 real 0m0.190s 00:07:31.987 user 0m0.076s 00:07:31.987 sys 0m0.083s 00:07:31.987 20:34:48 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.987 ************************************ 00:07:31.987 END TEST nvme_reset 00:07:31.987 20:34:48 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:31.987 ************************************ 00:07:31.987 20:34:48 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:31.987 20:34:48 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.987 20:34:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.987 20:34:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.987 ************************************ 00:07:31.987 START TEST nvme_identify 00:07:31.987 ************************************ 00:07:31.987 20:34:48 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:31.987 20:34:48 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:31.987 20:34:48 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:31.987 20:34:48 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:31.987 20:34:48 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:31.987 20:34:48 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:31.987 20:34:48 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:31.987 20:34:48 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:31.987 20:34:48 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:31.987 20:34:48 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:31.987 20:34:49 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:31.987 20:34:49 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:31.987 20:34:49 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:32.250 [2024-12-06 
20:34:49.188867] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62802 terminated unexpected 00:07:32.250 ===================================================== 00:07:32.250 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:32.250 ===================================================== 00:07:32.250 Controller Capabilities/Features 00:07:32.250 ================================ 00:07:32.250 Vendor ID: 1b36 00:07:32.250 Subsystem Vendor ID: 1af4 00:07:32.250 Serial Number: 12340 00:07:32.250 Model Number: QEMU NVMe Ctrl 00:07:32.250 Firmware Version: 8.0.0 00:07:32.250 Recommended Arb Burst: 6 00:07:32.250 IEEE OUI Identifier: 00 54 52 00:07:32.250 Multi-path I/O 00:07:32.250 May have multiple subsystem ports: No 00:07:32.250 May have multiple controllers: No 00:07:32.250 Associated with SR-IOV VF: No 00:07:32.250 Max Data Transfer Size: 524288 00:07:32.250 Max Number of Namespaces: 256 00:07:32.250 Max Number of I/O Queues: 64 00:07:32.250 NVMe Specification Version (VS): 1.4 00:07:32.250 NVMe Specification Version (Identify): 1.4 00:07:32.250 Maximum Queue Entries: 2048 00:07:32.250 Contiguous Queues Required: Yes 00:07:32.250 Arbitration Mechanisms Supported 00:07:32.250 Weighted Round Robin: Not Supported 00:07:32.250 Vendor Specific: Not Supported 00:07:32.250 Reset Timeout: 7500 ms 00:07:32.250 Doorbell Stride: 4 bytes 00:07:32.250 NVM Subsystem Reset: Not Supported 00:07:32.250 Command Sets Supported 00:07:32.250 NVM Command Set: Supported 00:07:32.250 Boot Partition: Not Supported 00:07:32.250 Memory Page Size Minimum: 4096 bytes 00:07:32.250 Memory Page Size Maximum: 65536 bytes 00:07:32.250 Persistent Memory Region: Not Supported 00:07:32.250 Optional Asynchronous Events Supported 00:07:32.250 Namespace Attribute Notices: Supported 00:07:32.250 Firmware Activation Notices: Not Supported 00:07:32.250 ANA Change Notices: Not Supported 00:07:32.250 PLE Aggregate Log Change Notices: Not Supported 00:07:32.250 LBA Status Info Alert Notices: Not Supported 00:07:32.250 EGE Aggregate Log Change Notices: Not Supported 00:07:32.250 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.250 Zone Descriptor Change Notices: Not Supported 00:07:32.250 Discovery Log Change Notices: Not Supported 00:07:32.250 Controller Attributes 00:07:32.250 128-bit Host Identifier: Not Supported 00:07:32.250 Non-Operational Permissive Mode: Not Supported 00:07:32.250 NVM Sets: Not Supported 00:07:32.250 Read Recovery Levels: Not Supported 00:07:32.250 Endurance Groups: Not Supported 00:07:32.250 Predictable Latency Mode: Not Supported 00:07:32.250 Traffic Based Keep ALive: Not Supported 00:07:32.250 Namespace Granularity: Not Supported 00:07:32.250 SQ Associations: Not Supported 00:07:32.250 UUID List: Not Supported 00:07:32.250 Multi-Domain Subsystem: Not Supported 00:07:32.250 Fixed Capacity Management: Not Supported 00:07:32.250 Variable Capacity Management: Not Supported 00:07:32.250 Delete Endurance Group: Not Supported 00:07:32.250 Delete NVM Set: Not Supported 00:07:32.250 Extended LBA Formats Supported: Supported 00:07:32.250 Flexible Data Placement Supported: Not Supported 00:07:32.250 00:07:32.250 Controller Memory Buffer Support 00:07:32.250 ================================ 00:07:32.250 Supported: No 00:07:32.250 00:07:32.250 Persistent Memory Region Support 00:07:32.250 ================================ 00:07:32.250 Supported: No 00:07:32.250 00:07:32.250 Admin Command Set Attributes 00:07:32.250 ============================ 00:07:32.250 Security Send/Receive: 
Not Supported 00:07:32.250 Format NVM: Supported 00:07:32.250 Firmware Activate/Download: Not Supported 00:07:32.250 Namespace Management: Supported 00:07:32.250 Device Self-Test: Not Supported 00:07:32.250 Directives: Supported 00:07:32.250 NVMe-MI: Not Supported 00:07:32.250 Virtualization Management: Not Supported 00:07:32.250 Doorbell Buffer Config: Supported 00:07:32.250 Get LBA Status Capability: Not Supported 00:07:32.250 Command & Feature Lockdown Capability: Not Supported 00:07:32.250 Abort Command Limit: 4 00:07:32.250 Async Event Request Limit: 4 00:07:32.250 Number of Firmware Slots: N/A 00:07:32.250 Firmware Slot 1 Read-Only: N/A 00:07:32.250 Firmware Activation Without Reset: N/A 00:07:32.250 Multiple Update Detection Support: N/A 00:07:32.250 Firmware Update Granularity: No Information Provided 00:07:32.250 Per-Namespace SMART Log: Yes 00:07:32.250 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.250 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:32.250 Command Effects Log Page: Supported 00:07:32.250 Get Log Page Extended Data: Supported 00:07:32.250 Telemetry Log Pages: Not Supported 00:07:32.250 Persistent Event Log Pages: Not Supported 00:07:32.250 Supported Log Pages Log Page: May Support 00:07:32.250 Commands Supported & Effects Log Page: Not Supported 00:07:32.250 Feature Identifiers & Effects Log Page:May Support 00:07:32.250 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.250 Data Area 4 for Telemetry Log: Not Supported 00:07:32.250 Error Log Page Entries Supported: 1 00:07:32.250 Keep Alive: Not Supported 00:07:32.250 00:07:32.250 NVM Command Set Attributes 00:07:32.250 ========================== 00:07:32.250 Submission Queue Entry Size 00:07:32.250 Max: 64 00:07:32.250 Min: 64 00:07:32.250 Completion Queue Entry Size 00:07:32.250 Max: 16 00:07:32.250 Min: 16 00:07:32.250 Number of Namespaces: 256 00:07:32.250 Compare Command: Supported 00:07:32.250 Write Uncorrectable Command: Not Supported 00:07:32.250 Dataset Management Command: Supported 00:07:32.250 Write Zeroes Command: Supported 00:07:32.250 Set Features Save Field: Supported 00:07:32.250 Reservations: Not Supported 00:07:32.250 Timestamp: Supported 00:07:32.250 Copy: Supported 00:07:32.250 Volatile Write Cache: Present 00:07:32.250 Atomic Write Unit (Normal): 1 00:07:32.250 Atomic Write Unit (PFail): 1 00:07:32.250 Atomic Compare & Write Unit: 1 00:07:32.250 Fused Compare & Write: Not Supported 00:07:32.250 Scatter-Gather List 00:07:32.250 SGL Command Set: Supported 00:07:32.250 SGL Keyed: Not Supported 00:07:32.250 SGL Bit Bucket Descriptor: Not Supported 00:07:32.250 SGL Metadata Pointer: Not Supported 00:07:32.250 Oversized SGL: Not Supported 00:07:32.250 SGL Metadata Address: Not Supported 00:07:32.250 SGL Offset: Not Supported 00:07:32.250 Transport SGL Data Block: Not Supported 00:07:32.250 Replay Protected Memory Block: Not Supported 00:07:32.250 00:07:32.250 Firmware Slot Information 00:07:32.250 ========================= 00:07:32.250 Active slot: 1 00:07:32.250 Slot 1 Firmware Revision: 1.0 00:07:32.250 00:07:32.250 00:07:32.250 Commands Supported and Effects 00:07:32.250 ============================== 00:07:32.250 Admin Commands 00:07:32.250 -------------- 00:07:32.250 Delete I/O Submission Queue (00h): Supported 00:07:32.250 Create I/O Submission Queue (01h): Supported 00:07:32.250 Get Log Page (02h): Supported 00:07:32.250 Delete I/O Completion Queue (04h): Supported 00:07:32.250 Create I/O Completion Queue (05h): Supported 00:07:32.250 Identify (06h): Supported 
00:07:32.250 Abort (08h): Supported 00:07:32.250 Set Features (09h): Supported 00:07:32.250 Get Features (0Ah): Supported 00:07:32.250 Asynchronous Event Request (0Ch): Supported 00:07:32.250 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.250 Directive Send (19h): Supported 00:07:32.250 Directive Receive (1Ah): Supported 00:07:32.251 Virtualization Management (1Ch): Supported 00:07:32.251 Doorbell Buffer Config (7Ch): Supported 00:07:32.251 Format NVM (80h): Supported LBA-Change 00:07:32.251 I/O Commands 00:07:32.251 ------------ 00:07:32.251 Flush (00h): Supported LBA-Change 00:07:32.251 Write (01h): Supported LBA-Change 00:07:32.251 Read (02h): Supported 00:07:32.251 Compare (05h): Supported 00:07:32.251 Write Zeroes (08h): Supported LBA-Change 00:07:32.251 Dataset Management (09h): Supported LBA-Change 00:07:32.251 Unknown (0Ch): Supported 00:07:32.251 Unknown (12h): Supported 00:07:32.251 Copy (19h): Supported LBA-Change 00:07:32.251 Unknown (1Dh): Supported LBA-Change 00:07:32.251 00:07:32.251 Error Log 00:07:32.251 ========= 00:07:32.251 00:07:32.251 Arbitration 00:07:32.251 =========== 00:07:32.251 Arbitration Burst: no limit 00:07:32.251 00:07:32.251 Power Management 00:07:32.251 ================ 00:07:32.251 Number of Power States: 1 00:07:32.251 Current Power State: Power State #0 00:07:32.251 Power State #0: 00:07:32.251 Max Power: 25.00 W 00:07:32.251 Non-Operational State: Operational 00:07:32.251 Entry Latency: 16 microseconds 00:07:32.251 Exit Latency: 4 microseconds 00:07:32.251 Relative Read Throughput: 0 00:07:32.251 Relative Read Latency: 0 00:07:32.251 Relative Write Throughput: 0 00:07:32.251 Relative Write Latency: 0 00:07:32.251 Idle Power[2024-12-06 20:34:49.190105] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62802 terminated unexpected 00:07:32.251 : Not Reported 00:07:32.251 Active Power: Not Reported 00:07:32.251 Non-Operational Permissive Mode: Not Supported 00:07:32.251 00:07:32.251 Health Information 00:07:32.251 ================== 00:07:32.251 Critical Warnings: 00:07:32.251 Available Spare Space: OK 00:07:32.251 Temperature: OK 00:07:32.251 Device Reliability: OK 00:07:32.251 Read Only: No 00:07:32.251 Volatile Memory Backup: OK 00:07:32.251 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.251 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.251 Available Spare: 0% 00:07:32.251 Available Spare Threshold: 0% 00:07:32.251 Life Percentage Used: 0% 00:07:32.251 Data Units Read: 665 00:07:32.251 Data Units Written: 593 00:07:32.251 Host Read Commands: 36531 00:07:32.251 Host Write Commands: 36317 00:07:32.251 Controller Busy Time: 0 minutes 00:07:32.251 Power Cycles: 0 00:07:32.251 Power On Hours: 0 hours 00:07:32.251 Unsafe Shutdowns: 0 00:07:32.251 Unrecoverable Media Errors: 0 00:07:32.251 Lifetime Error Log Entries: 0 00:07:32.251 Warning Temperature Time: 0 minutes 00:07:32.251 Critical Temperature Time: 0 minutes 00:07:32.251 00:07:32.251 Number of Queues 00:07:32.251 ================ 00:07:32.251 Number of I/O Submission Queues: 64 00:07:32.251 Number of I/O Completion Queues: 64 00:07:32.251 00:07:32.251 ZNS Specific Controller Data 00:07:32.251 ============================ 00:07:32.251 Zone Append Size Limit: 0 00:07:32.251 00:07:32.251 00:07:32.251 Active Namespaces 00:07:32.251 ================= 00:07:32.251 Namespace ID:1 00:07:32.251 Error Recovery Timeout: Unlimited 00:07:32.251 Command Set Identifier: NVM (00h) 00:07:32.251 Deallocate: Supported 00:07:32.251 
Deallocated/Unwritten Error: Supported 00:07:32.251 Deallocated Read Value: All 0x00 00:07:32.251 Deallocate in Write Zeroes: Not Supported 00:07:32.251 Deallocated Guard Field: 0xFFFF 00:07:32.251 Flush: Supported 00:07:32.251 Reservation: Not Supported 00:07:32.251 Metadata Transferred as: Separate Metadata Buffer 00:07:32.251 Namespace Sharing Capabilities: Private 00:07:32.251 Size (in LBAs): 1548666 (5GiB) 00:07:32.251 Capacity (in LBAs): 1548666 (5GiB) 00:07:32.251 Utilization (in LBAs): 1548666 (5GiB) 00:07:32.251 Thin Provisioning: Not Supported 00:07:32.251 Per-NS Atomic Units: No 00:07:32.251 Maximum Single Source Range Length: 128 00:07:32.251 Maximum Copy Length: 128 00:07:32.251 Maximum Source Range Count: 128 00:07:32.251 NGUID/EUI64 Never Reused: No 00:07:32.251 Namespace Write Protected: No 00:07:32.251 Number of LBA Formats: 8 00:07:32.251 Current LBA Format: LBA Format #07 00:07:32.251 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.251 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.251 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.251 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.251 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.251 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.251 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.251 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.251 00:07:32.251 NVM Specific Namespace Data 00:07:32.251 =========================== 00:07:32.251 Logical Block Storage Tag Mask: 0 00:07:32.251 Protection Information Capabilities: 00:07:32.251 16b Guard Protection Information Storage Tag Support: No 00:07:32.251 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.251 Storage Tag Check Read Support: No 00:07:32.251 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.251 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.251 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.251 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.251 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.251 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.251 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.251 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.251 ===================================================== 00:07:32.251 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:32.251 ===================================================== 00:07:32.251 Controller Capabilities/Features 00:07:32.251 ================================ 00:07:32.251 Vendor ID: 1b36 00:07:32.251 Subsystem Vendor ID: 1af4 00:07:32.251 Serial Number: 12341 00:07:32.251 Model Number: QEMU NVMe Ctrl 00:07:32.251 Firmware Version: 8.0.0 00:07:32.251 Recommended Arb Burst: 6 00:07:32.251 IEEE OUI Identifier: 00 54 52 00:07:32.251 Multi-path I/O 00:07:32.251 May have multiple subsystem ports: No 00:07:32.251 May have multiple controllers: No 00:07:32.251 Associated with SR-IOV VF: No 00:07:32.251 Max Data Transfer Size: 524288 00:07:32.251 Max Number of Namespaces: 256 00:07:32.252 Max Number of I/O Queues: 64 00:07:32.252 NVMe Specification Version (VS): 1.4 00:07:32.252 NVMe 
Specification Version (Identify): 1.4 00:07:32.252 Maximum Queue Entries: 2048 00:07:32.252 Contiguous Queues Required: Yes 00:07:32.252 Arbitration Mechanisms Supported 00:07:32.252 Weighted Round Robin: Not Supported 00:07:32.252 Vendor Specific: Not Supported 00:07:32.252 Reset Timeout: 7500 ms 00:07:32.252 Doorbell Stride: 4 bytes 00:07:32.252 NVM Subsystem Reset: Not Supported 00:07:32.252 Command Sets Supported 00:07:32.252 NVM Command Set: Supported 00:07:32.252 Boot Partition: Not Supported 00:07:32.252 Memory Page Size Minimum: 4096 bytes 00:07:32.252 Memory Page Size Maximum: 65536 bytes 00:07:32.252 Persistent Memory Region: Not Supported 00:07:32.252 Optional Asynchronous Events Supported 00:07:32.252 Namespace Attribute Notices: Supported 00:07:32.252 Firmware Activation Notices: Not Supported 00:07:32.252 ANA Change Notices: Not Supported 00:07:32.252 PLE Aggregate Log Change Notices: Not Supported 00:07:32.252 LBA Status Info Alert Notices: Not Supported 00:07:32.252 EGE Aggregate Log Change Notices: Not Supported 00:07:32.252 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.252 Zone Descriptor Change Notices: Not Supported 00:07:32.252 Discovery Log Change Notices: Not Supported 00:07:32.252 Controller Attributes 00:07:32.252 128-bit Host Identifier: Not Supported 00:07:32.252 Non-Operational Permissive Mode: Not Supported 00:07:32.252 NVM Sets: Not Supported 00:07:32.252 Read Recovery Levels: Not Supported 00:07:32.252 Endurance Groups: Not Supported 00:07:32.252 Predictable Latency Mode: Not Supported 00:07:32.252 Traffic Based Keep ALive: Not Supported 00:07:32.252 Namespace Granularity: Not Supported 00:07:32.252 SQ Associations: Not Supported 00:07:32.252 UUID List: Not Supported 00:07:32.252 Multi-Domain Subsystem: Not Supported 00:07:32.252 Fixed Capacity Management: Not Supported 00:07:32.252 Variable Capacity Management: Not Supported 00:07:32.252 Delete Endurance Group: Not Supported 00:07:32.252 Delete NVM Set: Not Supported 00:07:32.252 Extended LBA Formats Supported: Supported 00:07:32.252 Flexible Data Placement Supported: Not Supported 00:07:32.252 00:07:32.252 Controller Memory Buffer Support 00:07:32.252 ================================ 00:07:32.252 Supported: No 00:07:32.252 00:07:32.252 Persistent Memory Region Support 00:07:32.252 ================================ 00:07:32.252 Supported: No 00:07:32.252 00:07:32.252 Admin Command Set Attributes 00:07:32.252 ============================ 00:07:32.252 Security Send/Receive: Not Supported 00:07:32.252 Format NVM: Supported 00:07:32.252 Firmware Activate/Download: Not Supported 00:07:32.252 Namespace Management: Supported 00:07:32.252 Device Self-Test: Not Supported 00:07:32.252 Directives: Supported 00:07:32.252 NVMe-MI: Not Supported 00:07:32.252 Virtualization Management: Not Supported 00:07:32.252 Doorbell Buffer Config: Supported 00:07:32.252 Get LBA Status Capability: Not Supported 00:07:32.252 Command & Feature Lockdown Capability: Not Supported 00:07:32.252 Abort Command Limit: 4 00:07:32.252 Async Event Request Limit: 4 00:07:32.252 Number of Firmware Slots: N/A 00:07:32.252 Firmware Slot 1 Read-Only: N/A 00:07:32.252 Firmware Activation Without Reset: N/A 00:07:32.252 Multiple Update Detection Support: N/A 00:07:32.252 Firmware Update Granularity: No Information Provided 00:07:32.252 Per-Namespace SMART Log: Yes 00:07:32.252 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.252 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:32.252 Command Effects Log Page: Supported 
00:07:32.252 Get Log Page Extended Data: Supported 00:07:32.252 Telemetry Log Pages: Not Supported 00:07:32.252 Persistent Event Log Pages: Not Supported 00:07:32.252 Supported Log Pages Log Page: May Support 00:07:32.252 Commands Supported & Effects Log Page: Not Supported 00:07:32.252 Feature Identifiers & Effects Log Page:May Support 00:07:32.252 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.252 Data Area 4 for Telemetry Log: Not Supported 00:07:32.252 Error Log Page Entries Supported: 1 00:07:32.252 Keep Alive: Not Supported 00:07:32.252 00:07:32.252 NVM Command Set Attributes 00:07:32.252 ========================== 00:07:32.252 Submission Queue Entry Size 00:07:32.252 Max: 64 00:07:32.252 Min: 64 00:07:32.252 Completion Queue Entry Size 00:07:32.252 Max: 16 00:07:32.252 Min: 16 00:07:32.252 Number of Namespaces: 256 00:07:32.252 Compare Command: Supported 00:07:32.252 Write Uncorrectable Command: Not Supported 00:07:32.252 Dataset Management Command: Supported 00:07:32.252 Write Zeroes Command: Supported 00:07:32.252 Set Features Save Field: Supported 00:07:32.252 Reservations: Not Supported 00:07:32.252 Timestamp: Supported 00:07:32.252 Copy: Supported 00:07:32.252 Volatile Write Cache: Present 00:07:32.252 Atomic Write Unit (Normal): 1 00:07:32.252 Atomic Write Unit (PFail): 1 00:07:32.252 Atomic Compare & Write Unit: 1 00:07:32.252 Fused Compare & Write: Not Supported 00:07:32.252 Scatter-Gather List 00:07:32.252 SGL Command Set: Supported 00:07:32.252 SGL Keyed: Not Supported 00:07:32.252 SGL Bit Bucket Descriptor: Not Supported 00:07:32.252 SGL Metadata Pointer: Not Supported 00:07:32.252 Oversized SGL: Not Supported 00:07:32.252 SGL Metadata Address: Not Supported 00:07:32.252 SGL Offset: Not Supported 00:07:32.252 Transport SGL Data Block: Not Supported 00:07:32.252 Replay Protected Memory Block: Not Supported 00:07:32.252 00:07:32.252 Firmware Slot Information 00:07:32.252 ========================= 00:07:32.252 Active slot: 1 00:07:32.252 Slot 1 Firmware Revision: 1.0 00:07:32.252 00:07:32.252 00:07:32.252 Commands Supported and Effects 00:07:32.252 ============================== 00:07:32.252 Admin Commands 00:07:32.252 -------------- 00:07:32.252 Delete I/O Submission Queue (00h): Supported 00:07:32.252 Create I/O Submission Queue (01h): Supported 00:07:32.252 Get Log Page (02h): Supported 00:07:32.252 Delete I/O Completion Queue (04h): Supported 00:07:32.252 Create I/O Completion Queue (05h): Supported 00:07:32.252 Identify (06h): Supported 00:07:32.252 Abort (08h): Supported 00:07:32.252 Set Features (09h): Supported 00:07:32.252 Get Features (0Ah): Supported 00:07:32.252 Asynchronous Event Request (0Ch): Supported 00:07:32.252 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.252 Directive Send (19h): Supported 00:07:32.252 Directive Receive (1Ah): Supported 00:07:32.252 Virtualization Management (1Ch): Supported 00:07:32.252 Doorbell Buffer Config (7Ch): Supported 00:07:32.252 Format NVM (80h): Supported LBA-Change 00:07:32.252 I/O Commands 00:07:32.252 ------------ 00:07:32.252 Flush (00h): Supported LBA-Change 00:07:32.252 Write (01h): Supported LBA-Change 00:07:32.252 Read (02h): Supported 00:07:32.252 Compare (05h): Supported 00:07:32.252 Write Zeroes (08h): Supported LBA-Change 00:07:32.252 Dataset Management (09h): Supported LBA-Change 00:07:32.252 Unknown (0Ch): Supported 00:07:32.252 Unknown (12h): Supported 00:07:32.253 Copy (19h): Supported LBA-Change 00:07:32.253 Unknown (1Dh): Supported LBA-Change 00:07:32.253 00:07:32.253 Error 
Log 00:07:32.253 ========= 00:07:32.253 00:07:32.253 Arbitration 00:07:32.253 =========== 00:07:32.253 Arbitration Burst: no limit 00:07:32.253 00:07:32.253 Power Management 00:07:32.253 ================ 00:07:32.253 Number of Power States: 1 00:07:32.253 Current Power State: Power State #0 00:07:32.253 Power State #0: 00:07:32.253 Max Power: 25.00 W 00:07:32.253 Non-Operational State: Operational 00:07:32.253 Entry Latency: 16 microseconds 00:07:32.253 Exit Latency: 4 microseconds 00:07:32.253 Relative Read Throughput: 0 00:07:32.253 Relative Read Latency: 0 00:07:32.253 Relative Write Throughput: 0 00:07:32.253 Relative Write Latency: 0 00:07:32.253 Idle Power: Not Reported 00:07:32.253 Active Power: Not Reported 00:07:32.253 Non-Operational Permissive Mode: Not Supported 00:07:32.253 00:07:32.253 Health Information 00:07:32.253 ================== 00:07:32.253 Critical Warnings: 00:07:32.253 Available Spare Space: OK 00:07:32.253 Temperature: [2024-12-06 20:34:49.190770] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62802 terminated unexpected 00:07:32.253 OK 00:07:32.253 Device Reliability: OK 00:07:32.253 Read Only: No 00:07:32.253 Volatile Memory Backup: OK 00:07:32.253 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.253 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.253 Available Spare: 0% 00:07:32.253 Available Spare Threshold: 0% 00:07:32.253 Life Percentage Used: 0% 00:07:32.253 Data Units Read: 1048 00:07:32.253 Data Units Written: 915 00:07:32.253 Host Read Commands: 54981 00:07:32.253 Host Write Commands: 53778 00:07:32.253 Controller Busy Time: 0 minutes 00:07:32.253 Power Cycles: 0 00:07:32.253 Power On Hours: 0 hours 00:07:32.253 Unsafe Shutdowns: 0 00:07:32.253 Unrecoverable Media Errors: 0 00:07:32.253 Lifetime Error Log Entries: 0 00:07:32.253 Warning Temperature Time: 0 minutes 00:07:32.253 Critical Temperature Time: 0 minutes 00:07:32.253 00:07:32.253 Number of Queues 00:07:32.253 ================ 00:07:32.253 Number of I/O Submission Queues: 64 00:07:32.253 Number of I/O Completion Queues: 64 00:07:32.253 00:07:32.253 ZNS Specific Controller Data 00:07:32.253 ============================ 00:07:32.253 Zone Append Size Limit: 0 00:07:32.253 00:07:32.253 00:07:32.253 Active Namespaces 00:07:32.253 ================= 00:07:32.253 Namespace ID:1 00:07:32.253 Error Recovery Timeout: Unlimited 00:07:32.253 Command Set Identifier: NVM (00h) 00:07:32.253 Deallocate: Supported 00:07:32.253 Deallocated/Unwritten Error: Supported 00:07:32.253 Deallocated Read Value: All 0x00 00:07:32.253 Deallocate in Write Zeroes: Not Supported 00:07:32.253 Deallocated Guard Field: 0xFFFF 00:07:32.253 Flush: Supported 00:07:32.253 Reservation: Not Supported 00:07:32.253 Namespace Sharing Capabilities: Private 00:07:32.253 Size (in LBAs): 1310720 (5GiB) 00:07:32.253 Capacity (in LBAs): 1310720 (5GiB) 00:07:32.253 Utilization (in LBAs): 1310720 (5GiB) 00:07:32.253 Thin Provisioning: Not Supported 00:07:32.253 Per-NS Atomic Units: No 00:07:32.253 Maximum Single Source Range Length: 128 00:07:32.253 Maximum Copy Length: 128 00:07:32.253 Maximum Source Range Count: 128 00:07:32.253 NGUID/EUI64 Never Reused: No 00:07:32.253 Namespace Write Protected: No 00:07:32.253 Number of LBA Formats: 8 00:07:32.253 Current LBA Format: LBA Format #04 00:07:32.253 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.253 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.253 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.253 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:07:32.253 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.253 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.253 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.253 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.253 00:07:32.253 NVM Specific Namespace Data 00:07:32.253 =========================== 00:07:32.253 Logical Block Storage Tag Mask: 0 00:07:32.253 Protection Information Capabilities: 00:07:32.253 16b Guard Protection Information Storage Tag Support: No 00:07:32.253 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.253 Storage Tag Check Read Support: No 00:07:32.253 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.253 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.253 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.253 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.253 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.253 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.253 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.253 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.253 ===================================================== 00:07:32.253 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:32.253 ===================================================== 00:07:32.253 Controller Capabilities/Features 00:07:32.253 ================================ 00:07:32.253 Vendor ID: 1b36 00:07:32.253 Subsystem Vendor ID: 1af4 00:07:32.253 Serial Number: 12343 00:07:32.253 Model Number: QEMU NVMe Ctrl 00:07:32.253 Firmware Version: 8.0.0 00:07:32.253 Recommended Arb Burst: 6 00:07:32.253 IEEE OUI Identifier: 00 54 52 00:07:32.253 Multi-path I/O 00:07:32.253 May have multiple subsystem ports: No 00:07:32.253 May have multiple controllers: Yes 00:07:32.253 Associated with SR-IOV VF: No 00:07:32.253 Max Data Transfer Size: 524288 00:07:32.253 Max Number of Namespaces: 256 00:07:32.253 Max Number of I/O Queues: 64 00:07:32.253 NVMe Specification Version (VS): 1.4 00:07:32.253 NVMe Specification Version (Identify): 1.4 00:07:32.253 Maximum Queue Entries: 2048 00:07:32.253 Contiguous Queues Required: Yes 00:07:32.253 Arbitration Mechanisms Supported 00:07:32.253 Weighted Round Robin: Not Supported 00:07:32.253 Vendor Specific: Not Supported 00:07:32.253 Reset Timeout: 7500 ms 00:07:32.253 Doorbell Stride: 4 bytes 00:07:32.253 NVM Subsystem Reset: Not Supported 00:07:32.253 Command Sets Supported 00:07:32.253 NVM Command Set: Supported 00:07:32.253 Boot Partition: Not Supported 00:07:32.253 Memory Page Size Minimum: 4096 bytes 00:07:32.253 Memory Page Size Maximum: 65536 bytes 00:07:32.253 Persistent Memory Region: Not Supported 00:07:32.253 Optional Asynchronous Events Supported 00:07:32.253 Namespace Attribute Notices: Supported 00:07:32.253 Firmware Activation Notices: Not Supported 00:07:32.253 ANA Change Notices: Not Supported 00:07:32.253 PLE Aggregate Log Change Notices: Not Supported 00:07:32.253 LBA Status Info Alert Notices: Not Supported 00:07:32.253 EGE Aggregate Log Change Notices: Not Supported 00:07:32.253 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.253 Zone 
Descriptor Change Notices: Not Supported 00:07:32.253 Discovery Log Change Notices: Not Supported 00:07:32.253 Controller Attributes 00:07:32.253 128-bit Host Identifier: Not Supported 00:07:32.253 Non-Operational Permissive Mode: Not Supported 00:07:32.253 NVM Sets: Not Supported 00:07:32.253 Read Recovery Levels: Not Supported 00:07:32.253 Endurance Groups: Supported 00:07:32.253 Predictable Latency Mode: Not Supported 00:07:32.253 Traffic Based Keep ALive: Not Supported 00:07:32.253 Namespace Granularity: Not Supported 00:07:32.253 SQ Associations: Not Supported 00:07:32.253 UUID List: Not Supported 00:07:32.253 Multi-Domain Subsystem: Not Supported 00:07:32.253 Fixed Capacity Management: Not Supported 00:07:32.253 Variable Capacity Management: Not Supported 00:07:32.254 Delete Endurance Group: Not Supported 00:07:32.254 Delete NVM Set: Not Supported 00:07:32.254 Extended LBA Formats Supported: Supported 00:07:32.254 Flexible Data Placement Supported: Supported 00:07:32.254 00:07:32.254 Controller Memory Buffer Support 00:07:32.254 ================================ 00:07:32.254 Supported: No 00:07:32.254 00:07:32.254 Persistent Memory Region Support 00:07:32.254 ================================ 00:07:32.254 Supported: No 00:07:32.254 00:07:32.254 Admin Command Set Attributes 00:07:32.254 ============================ 00:07:32.254 Security Send/Receive: Not Supported 00:07:32.254 Format NVM: Supported 00:07:32.254 Firmware Activate/Download: Not Supported 00:07:32.254 Namespace Management: Supported 00:07:32.254 Device Self-Test: Not Supported 00:07:32.254 Directives: Supported 00:07:32.254 NVMe-MI: Not Supported 00:07:32.254 Virtualization Management: Not Supported 00:07:32.254 Doorbell Buffer Config: Supported 00:07:32.254 Get LBA Status Capability: Not Supported 00:07:32.254 Command & Feature Lockdown Capability: Not Supported 00:07:32.254 Abort Command Limit: 4 00:07:32.254 Async Event Request Limit: 4 00:07:32.254 Number of Firmware Slots: N/A 00:07:32.254 Firmware Slot 1 Read-Only: N/A 00:07:32.254 Firmware Activation Without Reset: N/A 00:07:32.254 Multiple Update Detection Support: N/A 00:07:32.254 Firmware Update Granularity: No Information Provided 00:07:32.254 Per-Namespace SMART Log: Yes 00:07:32.254 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.254 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:32.254 Command Effects Log Page: Supported 00:07:32.254 Get Log Page Extended Data: Supported 00:07:32.254 Telemetry Log Pages: Not Supported 00:07:32.254 Persistent Event Log Pages: Not Supported 00:07:32.254 Supported Log Pages Log Page: May Support 00:07:32.254 Commands Supported & Effects Log Page: Not Supported 00:07:32.254 Feature Identifiers & Effects Log Page:May Support 00:07:32.254 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.254 Data Area 4 for Telemetry Log: Not Supported 00:07:32.254 Error Log Page Entries Supported: 1 00:07:32.254 Keep Alive: Not Supported 00:07:32.254 00:07:32.254 NVM Command Set Attributes 00:07:32.254 ========================== 00:07:32.254 Submission Queue Entry Size 00:07:32.254 Max: 64 00:07:32.254 Min: 64 00:07:32.254 Completion Queue Entry Size 00:07:32.254 Max: 16 00:07:32.254 Min: 16 00:07:32.254 Number of Namespaces: 256 00:07:32.254 Compare Command: Supported 00:07:32.254 Write Uncorrectable Command: Not Supported 00:07:32.254 Dataset Management Command: Supported 00:07:32.254 Write Zeroes Command: Supported 00:07:32.254 Set Features Save Field: Supported 00:07:32.254 Reservations: Not Supported 00:07:32.254 
Timestamp: Supported 00:07:32.254 Copy: Supported 00:07:32.254 Volatile Write Cache: Present 00:07:32.254 Atomic Write Unit (Normal): 1 00:07:32.254 Atomic Write Unit (PFail): 1 00:07:32.254 Atomic Compare & Write Unit: 1 00:07:32.254 Fused Compare & Write: Not Supported 00:07:32.254 Scatter-Gather List 00:07:32.254 SGL Command Set: Supported 00:07:32.254 SGL Keyed: Not Supported 00:07:32.254 SGL Bit Bucket Descriptor: Not Supported 00:07:32.254 SGL Metadata Pointer: Not Supported 00:07:32.254 Oversized SGL: Not Supported 00:07:32.254 SGL Metadata Address: Not Supported 00:07:32.254 SGL Offset: Not Supported 00:07:32.254 Transport SGL Data Block: Not Supported 00:07:32.254 Replay Protected Memory Block: Not Supported 00:07:32.254 00:07:32.254 Firmware Slot Information 00:07:32.254 ========================= 00:07:32.254 Active slot: 1 00:07:32.254 Slot 1 Firmware Revision: 1.0 00:07:32.254 00:07:32.254 00:07:32.254 Commands Supported and Effects 00:07:32.254 ============================== 00:07:32.254 Admin Commands 00:07:32.254 -------------- 00:07:32.254 Delete I/O Submission Queue (00h): Supported 00:07:32.254 Create I/O Submission Queue (01h): Supported 00:07:32.254 Get Log Page (02h): Supported 00:07:32.254 Delete I/O Completion Queue (04h): Supported 00:07:32.254 Create I/O Completion Queue (05h): Supported 00:07:32.254 Identify (06h): Supported 00:07:32.254 Abort (08h): Supported 00:07:32.254 Set Features (09h): Supported 00:07:32.254 Get Features (0Ah): Supported 00:07:32.254 Asynchronous Event Request (0Ch): Supported 00:07:32.254 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.254 Directive Send (19h): Supported 00:07:32.254 Directive Receive (1Ah): Supported 00:07:32.254 Virtualization Management (1Ch): Supported 00:07:32.254 Doorbell Buffer Config (7Ch): Supported 00:07:32.254 Format NVM (80h): Supported LBA-Change 00:07:32.254 I/O Commands 00:07:32.254 ------------ 00:07:32.254 Flush (00h): Supported LBA-Change 00:07:32.254 Write (01h): Supported LBA-Change 00:07:32.254 Read (02h): Supported 00:07:32.254 Compare (05h): Supported 00:07:32.254 Write Zeroes (08h): Supported LBA-Change 00:07:32.254 Dataset Management (09h): Supported LBA-Change 00:07:32.254 Unknown (0Ch): Supported 00:07:32.254 Unknown (12h): Supported 00:07:32.254 Copy (19h): Supported LBA-Change 00:07:32.254 Unknown (1Dh): Supported LBA-Change 00:07:32.254 00:07:32.254 Error Log 00:07:32.254 ========= 00:07:32.254 00:07:32.254 Arbitration 00:07:32.254 =========== 00:07:32.254 Arbitration Burst: no limit 00:07:32.254 00:07:32.254 Power Management 00:07:32.254 ================ 00:07:32.254 Number of Power States: 1 00:07:32.254 Current Power State: Power State #0 00:07:32.254 Power State #0: 00:07:32.254 Max Power: 25.00 W 00:07:32.254 Non-Operational State: Operational 00:07:32.254 Entry Latency: 16 microseconds 00:07:32.254 Exit Latency: 4 microseconds 00:07:32.254 Relative Read Throughput: 0 00:07:32.254 Relative Read Latency: 0 00:07:32.254 Relative Write Throughput: 0 00:07:32.254 Relative Write Latency: 0 00:07:32.254 Idle Power: Not Reported 00:07:32.254 Active Power: Not Reported 00:07:32.254 Non-Operational Permissive Mode: Not Supported 00:07:32.254 00:07:32.254 Health Information 00:07:32.254 ================== 00:07:32.254 Critical Warnings: 00:07:32.254 Available Spare Space: OK 00:07:32.254 Temperature: OK 00:07:32.254 Device Reliability: OK 00:07:32.254 Read Only: No 00:07:32.254 Volatile Memory Backup: OK 00:07:32.254 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.254 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.254 Available Spare: 0% 00:07:32.254 Available Spare Threshold: 0% 00:07:32.254 Life Percentage Used: 0% 00:07:32.254 Data Units Read: 947 00:07:32.254 Data Units Written: 876 00:07:32.254 Host Read Commands: 39092 00:07:32.254 Host Write Commands: 38515 00:07:32.254 Controller Busy Time: 0 minutes 00:07:32.254 Power Cycles: 0 00:07:32.254 Power On Hours: 0 hours 00:07:32.254 Unsafe Shutdowns: 0 00:07:32.254 Unrecoverable Media Errors: 0 00:07:32.254 Lifetime Error Log Entries: 0 00:07:32.254 Warning Temperature Time: 0 minutes 00:07:32.254 Critical Temperature Time: 0 minutes 00:07:32.254 00:07:32.254 Number of Queues 00:07:32.254 ================ 00:07:32.254 Number of I/O Submission Queues: 64 00:07:32.254 Number of I/O Completion Queues: 64 00:07:32.254 00:07:32.254 ZNS Specific Controller Data 00:07:32.254 ============================ 00:07:32.254 Zone Append Size Limit: 0 00:07:32.254 00:07:32.254 00:07:32.254 Active Namespaces 00:07:32.254 ================= 00:07:32.254 Namespace ID:1 00:07:32.254 Error Recovery Timeout: Unlimited 00:07:32.254 Command Set Identifier: NVM (00h) 00:07:32.254 Deallocate: Supported 00:07:32.254 Deallocated/Unwritten Error: Supported 00:07:32.254 Deallocated Read Value: All 0x00 00:07:32.254 Deallocate in Write Zeroes: Not Supported 00:07:32.254 Deallocated Guard Field: 0xFFFF 00:07:32.254 Flush: Supported 00:07:32.254 Reservation: Not Supported 00:07:32.254 Namespace Sharing Capabilities: Multiple Controllers 00:07:32.254 Size (in LBAs): 262144 (1GiB) 00:07:32.254 Capacity (in LBAs): 262144 (1GiB) 00:07:32.254 Utilization (in LBAs): 262144 (1GiB) 00:07:32.254 Thin Provisioning: Not Supported 00:07:32.254 Per-NS Atomic Units: No 00:07:32.254 Maximum Single Source Range Length: 128 00:07:32.254 Maximum Copy Length: 128 00:07:32.254 Maximum Source Range Count: 128 00:07:32.254 NGUID/EUI64 Never Reused: No 00:07:32.254 Namespace Write Protected: No 00:07:32.254 Endurance group ID: 1 00:07:32.254 Number of LBA Formats: 8 00:07:32.254 Current LBA Format: LBA Format #04 00:07:32.254 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.254 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.254 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.255 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.255 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.255 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.255 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.255 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.255 00:07:32.255 Get Feature FDP: 00:07:32.255 ================ 00:07:32.255 Enabled: Yes 00:07:32.255 FDP configuration index: 0 00:07:32.255 00:07:32.255 FDP configurations log page 00:07:32.255 =========================== 00:07:32.255 Number of FDP configurations: 1 00:07:32.255 Version: 0 00:07:32.255 Size: 112 00:07:32.255 FDP Configuration Descriptor: 0 00:07:32.255 Descriptor Size: 96 00:07:32.255 Reclaim Group Identifier format: 2 00:07:32.255 FDP Volatile Write Cache: Not Present 00:07:32.255 FDP Configuration: Valid 00:07:32.255 Vendor Specific Size: 0 00:07:32.255 Number of Reclaim Groups: 2 00:07:32.255 Number of Reclaim Unit Handles: 8 00:07:32.255 Max Placement Identifiers: 128 00:07:32.255 Number of Namespaces Supported: 256 00:07:32.255 Reclaim Unit Nominal Size: 6000000 bytes 00:07:32.255 Estimated Reclaim Unit Time Limit: Not Reported 00:07:32.255 RUH Desc #000: RUH Type: Initially Isolated 00:07:32.255 RUH Desc #001: RUH 
Type: Initially Isolated 00:07:32.255 RUH Desc #002: RUH Type: Initially Isolated 00:07:32.255 RUH Desc #003: RUH Type: Initially Isolated 00:07:32.255 RUH Desc #004: RUH Type: Initially Isolated 00:07:32.255 RUH Desc #005: RUH Type: Initially Isolated 00:07:32.255 RUH Desc #006: RUH Type: Initially Isolated 00:07:32.255 RUH Desc #007: RUH Type: Initially Isolated 00:07:32.255 00:07:32.255 FDP reclaim unit handle usage log page 00:07:32.255 ====================================== 00:07:32.255 Number of Reclaim Unit Handles: 8 00:07:32.255 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:32.255 RUH Usage Desc #001: RUH Attributes: Unused 00:07:32.255 RUH Usage Desc #002: RUH Attributes: Unused 00:07:32.255 RUH Usage Desc #003: RUH Attributes: Unused 00:07:32.255 RUH Usage Desc #004: RUH Attributes: Unused 00:07:32.255 RUH Usage Desc #005: RUH Attributes: Unused 00:07:32.255 RUH Usage Desc #006: RUH Attributes: Unused 00:07:32.255 RUH Usage Desc #007: RUH Attributes: Unused 00:07:32.255 00:07:32.255 FDP statistics log page 00:07:32.255 ======================= 00:07:32.255 Host bytes with metadata written: 545824768 00:07:32.255 [2024-12-06 20:34:49.191806] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62802 terminated unexpected 00:07:32.255 Media bytes with metadata written: 548851712 00:07:32.255 Media bytes erased: 0 00:07:32.255 00:07:32.255 FDP events log page 00:07:32.255 =================== 00:07:32.255 Number of FDP events: 0 00:07:32.255 00:07:32.255 NVM Specific Namespace Data 00:07:32.255 =========================== 00:07:32.255 Logical Block Storage Tag Mask: 0 00:07:32.255 Protection Information Capabilities: 00:07:32.255 16b Guard Protection Information Storage Tag Support: No 00:07:32.255 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.255 Storage Tag Check Read Support: No 00:07:32.255 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.255 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.255 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.255 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.255 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.255 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.255 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.255 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.255 ===================================================== 00:07:32.255 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:32.255 ===================================================== 00:07:32.255 Controller Capabilities/Features 00:07:32.255 ================================ 00:07:32.255 Vendor ID: 1b36 00:07:32.255 Subsystem Vendor ID: 1af4 00:07:32.255 Serial Number: 12342 00:07:32.255 Model Number: QEMU NVMe Ctrl 00:07:32.255 Firmware Version: 8.0.0 00:07:32.255 Recommended Arb Burst: 6 00:07:32.255 IEEE OUI Identifier: 00 54 52 00:07:32.255 Multi-path I/O 00:07:32.255 May have multiple subsystem ports: No 00:07:32.255 May have multiple controllers: No 00:07:32.255 Associated with SR-IOV VF: No 00:07:32.255 Max Data Transfer Size: 524288 00:07:32.255 Max Number of Namespaces: 256 
00:07:32.255 Max Number of I/O Queues: 64 00:07:32.255 NVMe Specification Version (VS): 1.4 00:07:32.255 NVMe Specification Version (Identify): 1.4 00:07:32.255 Maximum Queue Entries: 2048 00:07:32.255 Contiguous Queues Required: Yes 00:07:32.255 Arbitration Mechanisms Supported 00:07:32.255 Weighted Round Robin: Not Supported 00:07:32.255 Vendor Specific: Not Supported 00:07:32.255 Reset Timeout: 7500 ms 00:07:32.255 Doorbell Stride: 4 bytes 00:07:32.255 NVM Subsystem Reset: Not Supported 00:07:32.255 Command Sets Supported 00:07:32.255 NVM Command Set: Supported 00:07:32.255 Boot Partition: Not Supported 00:07:32.255 Memory Page Size Minimum: 4096 bytes 00:07:32.255 Memory Page Size Maximum: 65536 bytes 00:07:32.255 Persistent Memory Region: Not Supported 00:07:32.255 Optional Asynchronous Events Supported 00:07:32.255 Namespace Attribute Notices: Supported 00:07:32.255 Firmware Activation Notices: Not Supported 00:07:32.255 ANA Change Notices: Not Supported 00:07:32.255 PLE Aggregate Log Change Notices: Not Supported 00:07:32.255 LBA Status Info Alert Notices: Not Supported 00:07:32.255 EGE Aggregate Log Change Notices: Not Supported 00:07:32.255 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.255 Zone Descriptor Change Notices: Not Supported 00:07:32.255 Discovery Log Change Notices: Not Supported 00:07:32.255 Controller Attributes 00:07:32.255 128-bit Host Identifier: Not Supported 00:07:32.255 Non-Operational Permissive Mode: Not Supported 00:07:32.255 NVM Sets: Not Supported 00:07:32.255 Read Recovery Levels: Not Supported 00:07:32.255 Endurance Groups: Not Supported 00:07:32.255 Predictable Latency Mode: Not Supported 00:07:32.255 Traffic Based Keep Alive: Not Supported 00:07:32.255 Namespace Granularity: Not Supported 00:07:32.255 SQ Associations: Not Supported 00:07:32.255 UUID List: Not Supported 00:07:32.255 Multi-Domain Subsystem: Not Supported 00:07:32.255 Fixed Capacity Management: Not Supported 00:07:32.255 Variable Capacity Management: Not Supported 00:07:32.255 Delete Endurance Group: Not Supported 00:07:32.255 Delete NVM Set: Not Supported 00:07:32.255 Extended LBA Formats Supported: Supported 00:07:32.255 Flexible Data Placement Supported: Not Supported 00:07:32.255 00:07:32.255 Controller Memory Buffer Support 00:07:32.255 ================================ 00:07:32.255 Supported: No 00:07:32.255 00:07:32.255 Persistent Memory Region Support 00:07:32.255 ================================ 00:07:32.255 Supported: No 00:07:32.255 00:07:32.255 Admin Command Set Attributes 00:07:32.255 ============================ 00:07:32.255 Security Send/Receive: Not Supported 00:07:32.255 Format NVM: Supported 00:07:32.255 Firmware Activate/Download: Not Supported 00:07:32.255 Namespace Management: Supported 00:07:32.255 Device Self-Test: Not Supported 00:07:32.255 Directives: Supported 00:07:32.255 NVMe-MI: Not Supported 00:07:32.255 Virtualization Management: Not Supported 00:07:32.255 Doorbell Buffer Config: Supported 00:07:32.255 Get LBA Status Capability: Not Supported 00:07:32.256 Command & Feature Lockdown Capability: Not Supported 00:07:32.256 Abort Command Limit: 4 00:07:32.256 Async Event Request Limit: 4 00:07:32.256 Number of Firmware Slots: N/A 00:07:32.256 Firmware Slot 1 Read-Only: N/A 00:07:32.256 Firmware Activation Without Reset: N/A 00:07:32.256 Multiple Update Detection Support: N/A 00:07:32.256 Firmware Update Granularity: No Information Provided 00:07:32.256 Per-Namespace SMART Log: Yes 00:07:32.256 Asymmetric Namespace Access Log Page: Not Supported 
00:07:32.256 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:32.256 Command Effects Log Page: Supported 00:07:32.256 Get Log Page Extended Data: Supported 00:07:32.256 Telemetry Log Pages: Not Supported 00:07:32.256 Persistent Event Log Pages: Not Supported 00:07:32.256 Supported Log Pages Log Page: May Support 00:07:32.256 Commands Supported & Effects Log Page: Not Supported 00:07:32.256 Feature Identifiers & Effects Log Page: May Support 00:07:32.256 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.256 Data Area 4 for Telemetry Log: Not Supported 00:07:32.256 Error Log Page Entries Supported: 1 00:07:32.256 Keep Alive: Not Supported 00:07:32.256 00:07:32.256 NVM Command Set Attributes 00:07:32.256 ========================== 00:07:32.256 Submission Queue Entry Size 00:07:32.256 Max: 64 00:07:32.256 Min: 64 00:07:32.256 Completion Queue Entry Size 00:07:32.256 Max: 16 00:07:32.256 Min: 16 00:07:32.256 Number of Namespaces: 256 00:07:32.256 Compare Command: Supported 00:07:32.256 Write Uncorrectable Command: Not Supported 00:07:32.256 Dataset Management Command: Supported 00:07:32.256 Write Zeroes Command: Supported 00:07:32.256 Set Features Save Field: Supported 00:07:32.256 Reservations: Not Supported 00:07:32.256 Timestamp: Supported 00:07:32.256 Copy: Supported 00:07:32.256 Volatile Write Cache: Present 00:07:32.256 Atomic Write Unit (Normal): 1 00:07:32.256 Atomic Write Unit (PFail): 1 00:07:32.256 Atomic Compare & Write Unit: 1 00:07:32.256 Fused Compare & Write: Not Supported 00:07:32.256 Scatter-Gather List 00:07:32.256 SGL Command Set: Supported 00:07:32.256 SGL Keyed: Not Supported 00:07:32.256 SGL Bit Bucket Descriptor: Not Supported 00:07:32.256 SGL Metadata Pointer: Not Supported 00:07:32.256 Oversized SGL: Not Supported 00:07:32.256 SGL Metadata Address: Not Supported 00:07:32.256 SGL Offset: Not Supported 00:07:32.256 Transport SGL Data Block: Not Supported 00:07:32.256 Replay Protected Memory Block: Not Supported 00:07:32.256 00:07:32.256 Firmware Slot Information 00:07:32.256 ========================= 00:07:32.256 Active slot: 1 00:07:32.256 Slot 1 Firmware Revision: 1.0 00:07:32.256 00:07:32.256 00:07:32.256 Commands Supported and Effects 00:07:32.256 ============================== 00:07:32.256 Admin Commands 00:07:32.256 -------------- 00:07:32.256 Delete I/O Submission Queue (00h): Supported 00:07:32.256 Create I/O Submission Queue (01h): Supported 00:07:32.256 Get Log Page (02h): Supported 00:07:32.256 Delete I/O Completion Queue (04h): Supported 00:07:32.256 Create I/O Completion Queue (05h): Supported 00:07:32.256 Identify (06h): Supported 00:07:32.256 Abort (08h): Supported 00:07:32.256 Set Features (09h): Supported 00:07:32.256 Get Features (0Ah): Supported 00:07:32.256 Asynchronous Event Request (0Ch): Supported 00:07:32.256 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.256 Directive Send (19h): Supported 00:07:32.256 Directive Receive (1Ah): Supported 00:07:32.256 Virtualization Management (1Ch): Supported 00:07:32.256 Doorbell Buffer Config (7Ch): Supported 00:07:32.256 Format NVM (80h): Supported LBA-Change 00:07:32.256 I/O Commands 00:07:32.256 ------------ 00:07:32.256 Flush (00h): Supported LBA-Change 00:07:32.256 Write (01h): Supported LBA-Change 00:07:32.256 Read (02h): Supported 00:07:32.256 Compare (05h): Supported 00:07:32.256 Write Zeroes (08h): Supported LBA-Change 00:07:32.256 Dataset Management (09h): Supported LBA-Change 00:07:32.256 Unknown (0Ch): Supported 00:07:32.256 Unknown (12h): Supported 00:07:32.256 Copy (19h): 
Supported LBA-Change 00:07:32.256 Unknown (1Dh): Supported LBA-Change 00:07:32.256 00:07:32.256 Error Log 00:07:32.256 ========= 00:07:32.256 00:07:32.256 Arbitration 00:07:32.256 =========== 00:07:32.256 Arbitration Burst: no limit 00:07:32.256 00:07:32.256 Power Management 00:07:32.256 ================ 00:07:32.256 Number of Power States: 1 00:07:32.256 Current Power State: Power State #0 00:07:32.256 Power State #0: 00:07:32.256 Max Power: 25.00 W 00:07:32.256 Non-Operational State: Operational 00:07:32.256 Entry Latency: 16 microseconds 00:07:32.256 Exit Latency: 4 microseconds 00:07:32.256 Relative Read Throughput: 0 00:07:32.256 Relative Read Latency: 0 00:07:32.256 Relative Write Throughput: 0 00:07:32.256 Relative Write Latency: 0 00:07:32.256 Idle Power: Not Reported 00:07:32.256 Active Power: Not Reported 00:07:32.256 Non-Operational Permissive Mode: Not Supported 00:07:32.256 00:07:32.256 Health Information 00:07:32.256 ================== 00:07:32.256 Critical Warnings: 00:07:32.256 Available Spare Space: OK 00:07:32.256 Temperature: OK 00:07:32.256 Device Reliability: OK 00:07:32.256 Read Only: No 00:07:32.256 Volatile Memory Backup: OK 00:07:32.256 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.256 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.256 Available Spare: 0% 00:07:32.256 Available Spare Threshold: 0% 00:07:32.256 Life Percentage Used: 0% 00:07:32.256 Data Units Read: 2228 00:07:32.256 Data Units Written: 2015 00:07:32.256 Host Read Commands: 112503 00:07:32.256 Host Write Commands: 110772 00:07:32.256 Controller Busy Time: 0 minutes 00:07:32.256 Power Cycles: 0 00:07:32.256 Power On Hours: 0 hours 00:07:32.256 Unsafe Shutdowns: 0 00:07:32.256 Unrecoverable Media Errors: 0 00:07:32.256 Lifetime Error Log Entries: 0 00:07:32.256 Warning Temperature Time: 0 minutes 00:07:32.256 Critical Temperature Time: 0 minutes 00:07:32.256 00:07:32.256 Number of Queues 00:07:32.256 ================ 00:07:32.256 Number of I/O Submission Queues: 64 00:07:32.256 Number of I/O Completion Queues: 64 00:07:32.256 00:07:32.256 ZNS Specific Controller Data 00:07:32.256 ============================ 00:07:32.256 Zone Append Size Limit: 0 00:07:32.256 00:07:32.256 00:07:32.256 Active Namespaces 00:07:32.256 ================= 00:07:32.256 Namespace ID:1 00:07:32.256 Error Recovery Timeout: Unlimited 00:07:32.256 Command Set Identifier: NVM (00h) 00:07:32.256 Deallocate: Supported 00:07:32.256 Deallocated/Unwritten Error: Supported 00:07:32.256 Deallocated Read Value: All 0x00 00:07:32.256 Deallocate in Write Zeroes: Not Supported 00:07:32.256 Deallocated Guard Field: 0xFFFF 00:07:32.256 Flush: Supported 00:07:32.256 Reservation: Not Supported 00:07:32.256 Namespace Sharing Capabilities: Private 00:07:32.256 Size (in LBAs): 1048576 (4GiB) 00:07:32.256 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.256 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.256 Thin Provisioning: Not Supported 00:07:32.256 Per-NS Atomic Units: No 00:07:32.256 Maximum Single Source Range Length: 128 00:07:32.256 Maximum Copy Length: 128 00:07:32.256 Maximum Source Range Count: 128 00:07:32.256 NGUID/EUI64 Never Reused: No 00:07:32.256 Namespace Write Protected: No 00:07:32.256 Number of LBA Formats: 8 00:07:32.256 Current LBA Format: LBA Format #04 00:07:32.256 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.256 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.256 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.256 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.256 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.256 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.256 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.256 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.256 00:07:32.256 NVM Specific Namespace Data 00:07:32.256 =========================== 00:07:32.256 Logical Block Storage Tag Mask: 0 00:07:32.256 Protection Information Capabilities: 00:07:32.256 16b Guard Protection Information Storage Tag Support: No 00:07:32.256 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.256 Storage Tag Check Read Support: No 00:07:32.256 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.256 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.256 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.256 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.256 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.256 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.256 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.256 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.256 Namespace ID:2 00:07:32.257 Error Recovery Timeout: Unlimited 00:07:32.257 Command Set Identifier: NVM (00h) 00:07:32.257 Deallocate: Supported 00:07:32.257 Deallocated/Unwritten Error: Supported 00:07:32.257 Deallocated Read Value: All 0x00 00:07:32.257 Deallocate in Write Zeroes: Not Supported 00:07:32.257 Deallocated Guard Field: 0xFFFF 00:07:32.257 Flush: Supported 00:07:32.257 Reservation: Not Supported 00:07:32.257 Namespace Sharing Capabilities: Private 00:07:32.257 Size (in LBAs): 1048576 (4GiB) 00:07:32.257 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.257 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.257 Thin Provisioning: Not Supported 00:07:32.257 Per-NS Atomic Units: No 00:07:32.257 Maximum Single Source Range Length: 128 00:07:32.257 Maximum Copy Length: 128 00:07:32.257 Maximum Source Range Count: 128 00:07:32.257 NGUID/EUI64 Never Reused: No 00:07:32.257 Namespace Write Protected: No 00:07:32.257 Number of LBA Formats: 8 00:07:32.257 Current LBA Format: LBA Format #04 00:07:32.257 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.257 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.257 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.257 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.257 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.257 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.257 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.257 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.257 00:07:32.257 NVM Specific Namespace Data 00:07:32.257 =========================== 00:07:32.257 Logical Block Storage Tag Mask: 0 00:07:32.257 Protection Information Capabilities: 00:07:32.257 16b Guard Protection Information Storage Tag Support: No 00:07:32.257 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.257 Storage Tag Check Read Support: No 00:07:32.257 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Namespace ID:3 00:07:32.257 Error Recovery Timeout: Unlimited 00:07:32.257 Command Set Identifier: NVM (00h) 00:07:32.257 Deallocate: Supported 00:07:32.257 Deallocated/Unwritten Error: Supported 00:07:32.257 Deallocated Read Value: All 0x00 00:07:32.257 Deallocate in Write Zeroes: Not Supported 00:07:32.257 Deallocated Guard Field: 0xFFFF 00:07:32.257 Flush: Supported 00:07:32.257 Reservation: Not Supported 00:07:32.257 Namespace Sharing Capabilities: Private 00:07:32.257 Size (in LBAs): 1048576 (4GiB) 00:07:32.257 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.257 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.257 Thin Provisioning: Not Supported 00:07:32.257 Per-NS Atomic Units: No 00:07:32.257 Maximum Single Source Range Length: 128 00:07:32.257 Maximum Copy Length: 128 00:07:32.257 Maximum Source Range Count: 128 00:07:32.257 NGUID/EUI64 Never Reused: No 00:07:32.257 Namespace Write Protected: No 00:07:32.257 Number of LBA Formats: 8 00:07:32.257 Current LBA Format: LBA Format #04 00:07:32.257 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.257 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.257 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.257 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.257 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.257 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.257 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.257 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.257 00:07:32.257 NVM Specific Namespace Data 00:07:32.257 =========================== 00:07:32.257 Logical Block Storage Tag Mask: 0 00:07:32.257 Protection Information Capabilities: 00:07:32.257 16b Guard Protection Information Storage Tag Support: No 00:07:32.257 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.257 Storage Tag Check Read Support: No 00:07:32.257 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.257 20:34:49 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.257 20:34:49 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:32.516 ===================================================== 00:07:32.516 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:32.516 ===================================================== 00:07:32.516 Controller Capabilities/Features 00:07:32.516 ================================ 00:07:32.516 Vendor ID: 1b36 00:07:32.516 Subsystem Vendor ID: 1af4 00:07:32.516 Serial Number: 12340 00:07:32.517 Model Number: QEMU NVMe Ctrl 00:07:32.517 Firmware Version: 8.0.0 00:07:32.517 Recommended Arb Burst: 6 00:07:32.517 IEEE OUI Identifier: 00 54 52 00:07:32.517 Multi-path I/O 00:07:32.517 May have multiple subsystem ports: No 00:07:32.517 May have multiple controllers: No 00:07:32.517 Associated with SR-IOV VF: No 00:07:32.517 Max Data Transfer Size: 524288 00:07:32.517 Max Number of Namespaces: 256 00:07:32.517 Max Number of I/O Queues: 64 00:07:32.517 NVMe Specification Version (VS): 1.4 00:07:32.517 NVMe Specification Version (Identify): 1.4 00:07:32.517 Maximum Queue Entries: 2048 00:07:32.517 Contiguous Queues Required: Yes 00:07:32.517 Arbitration Mechanisms Supported 00:07:32.517 Weighted Round Robin: Not Supported 00:07:32.517 Vendor Specific: Not Supported 00:07:32.517 Reset Timeout: 7500 ms 00:07:32.517 Doorbell Stride: 4 bytes 00:07:32.517 NVM Subsystem Reset: Not Supported 00:07:32.517 Command Sets Supported 00:07:32.517 NVM Command Set: Supported 00:07:32.517 Boot Partition: Not Supported 00:07:32.517 Memory Page Size Minimum: 4096 bytes 00:07:32.517 Memory Page Size Maximum: 65536 bytes 00:07:32.517 Persistent Memory Region: Not Supported 00:07:32.517 Optional Asynchronous Events Supported 00:07:32.517 Namespace Attribute Notices: Supported 00:07:32.517 Firmware Activation Notices: Not Supported 00:07:32.517 ANA Change Notices: Not Supported 00:07:32.517 PLE Aggregate Log Change Notices: Not Supported 00:07:32.517 LBA Status Info Alert Notices: Not Supported 00:07:32.517 EGE Aggregate Log Change Notices: Not Supported 00:07:32.517 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.517 Zone Descriptor Change Notices: Not Supported 00:07:32.517 Discovery Log Change Notices: Not Supported 00:07:32.517 Controller Attributes 00:07:32.517 128-bit Host Identifier: Not Supported 00:07:32.517 Non-Operational Permissive Mode: Not Supported 00:07:32.517 NVM Sets: Not Supported 00:07:32.517 Read Recovery Levels: Not Supported 00:07:32.517 Endurance Groups: Not Supported 00:07:32.517 Predictable Latency Mode: Not Supported 00:07:32.517 Traffic Based Keep Alive: Not Supported 00:07:32.517 Namespace Granularity: Not Supported 00:07:32.517 SQ Associations: Not Supported 00:07:32.517 UUID List: Not Supported 00:07:32.517 Multi-Domain Subsystem: Not Supported 00:07:32.517 Fixed Capacity Management: Not Supported 00:07:32.517 Variable Capacity Management: Not Supported 00:07:32.517 Delete Endurance Group: Not Supported 00:07:32.517 Delete NVM Set: Not Supported 00:07:32.517 Extended LBA Formats Supported: Supported 00:07:32.517 Flexible Data Placement Supported: Not Supported 00:07:32.517 00:07:32.517 Controller Memory Buffer Support 00:07:32.517 ================================ 00:07:32.517 Supported: No 00:07:32.517 00:07:32.517 Persistent Memory Region Support 00:07:32.517 ================================ 00:07:32.517 Supported: No 00:07:32.517 00:07:32.517 Admin Command Set Attributes 00:07:32.517 ============================ 00:07:32.517 Security Send/Receive: Not Supported 00:07:32.517 
Format NVM: Supported 00:07:32.517 Firmware Activate/Download: Not Supported 00:07:32.517 Namespace Management: Supported 00:07:32.517 Device Self-Test: Not Supported 00:07:32.517 Directives: Supported 00:07:32.517 NVMe-MI: Not Supported 00:07:32.517 Virtualization Management: Not Supported 00:07:32.517 Doorbell Buffer Config: Supported 00:07:32.517 Get LBA Status Capability: Not Supported 00:07:32.517 Command & Feature Lockdown Capability: Not Supported 00:07:32.517 Abort Command Limit: 4 00:07:32.517 Async Event Request Limit: 4 00:07:32.517 Number of Firmware Slots: N/A 00:07:32.517 Firmware Slot 1 Read-Only: N/A 00:07:32.517 Firmware Activation Without Reset: N/A 00:07:32.517 Multiple Update Detection Support: N/A 00:07:32.517 Firmware Update Granularity: No Information Provided 00:07:32.517 Per-Namespace SMART Log: Yes 00:07:32.517 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.517 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:32.517 Command Effects Log Page: Supported 00:07:32.517 Get Log Page Extended Data: Supported 00:07:32.517 Telemetry Log Pages: Not Supported 00:07:32.517 Persistent Event Log Pages: Not Supported 00:07:32.517 Supported Log Pages Log Page: May Support 00:07:32.517 Commands Supported & Effects Log Page: Not Supported 00:07:32.517 Feature Identifiers & Effects Log Page: May Support 00:07:32.517 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.517 Data Area 4 for Telemetry Log: Not Supported 00:07:32.517 Error Log Page Entries Supported: 1 00:07:32.517 Keep Alive: Not Supported 00:07:32.517 00:07:32.517 NVM Command Set Attributes 00:07:32.517 ========================== 00:07:32.517 Submission Queue Entry Size 00:07:32.517 Max: 64 00:07:32.517 Min: 64 00:07:32.517 Completion Queue Entry Size 00:07:32.517 Max: 16 00:07:32.517 Min: 16 00:07:32.517 Number of Namespaces: 256 00:07:32.517 Compare Command: Supported 00:07:32.517 Write Uncorrectable Command: Not Supported 00:07:32.517 Dataset Management Command: Supported 00:07:32.517 Write Zeroes Command: Supported 00:07:32.517 Set Features Save Field: Supported 00:07:32.517 Reservations: Not Supported 00:07:32.517 Timestamp: Supported 00:07:32.517 Copy: Supported 00:07:32.517 Volatile Write Cache: Present 00:07:32.517 Atomic Write Unit (Normal): 1 00:07:32.517 Atomic Write Unit (PFail): 1 00:07:32.517 Atomic Compare & Write Unit: 1 00:07:32.517 Fused Compare & Write: Not Supported 00:07:32.517 Scatter-Gather List 00:07:32.517 SGL Command Set: Supported 00:07:32.517 SGL Keyed: Not Supported 00:07:32.517 SGL Bit Bucket Descriptor: Not Supported 00:07:32.517 SGL Metadata Pointer: Not Supported 00:07:32.517 Oversized SGL: Not Supported 00:07:32.517 SGL Metadata Address: Not Supported 00:07:32.517 SGL Offset: Not Supported 00:07:32.517 Transport SGL Data Block: Not Supported 00:07:32.517 Replay Protected Memory Block: Not Supported 00:07:32.517 00:07:32.517 Firmware Slot Information 00:07:32.517 ========================= 00:07:32.517 Active slot: 1 00:07:32.517 Slot 1 Firmware Revision: 1.0 00:07:32.517 00:07:32.517 00:07:32.517 Commands Supported and Effects 00:07:32.517 ============================== 00:07:32.517 Admin Commands 00:07:32.517 -------------- 00:07:32.517 Delete I/O Submission Queue (00h): Supported 00:07:32.517 Create I/O Submission Queue (01h): Supported 00:07:32.517 Get Log Page (02h): Supported 00:07:32.517 Delete I/O Completion Queue (04h): Supported 00:07:32.517 Create I/O Completion Queue (05h): Supported 00:07:32.517 Identify (06h): Supported 00:07:32.517 Abort (08h): Supported 
00:07:32.517 Set Features (09h): Supported 00:07:32.517 Get Features (0Ah): Supported 00:07:32.517 Asynchronous Event Request (0Ch): Supported 00:07:32.517 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.517 Directive Send (19h): Supported 00:07:32.517 Directive Receive (1Ah): Supported 00:07:32.517 Virtualization Management (1Ch): Supported 00:07:32.517 Doorbell Buffer Config (7Ch): Supported 00:07:32.517 Format NVM (80h): Supported LBA-Change 00:07:32.517 I/O Commands 00:07:32.517 ------------ 00:07:32.517 Flush (00h): Supported LBA-Change 00:07:32.517 Write (01h): Supported LBA-Change 00:07:32.517 Read (02h): Supported 00:07:32.517 Compare (05h): Supported 00:07:32.517 Write Zeroes (08h): Supported LBA-Change 00:07:32.517 Dataset Management (09h): Supported LBA-Change 00:07:32.517 Unknown (0Ch): Supported 00:07:32.517 Unknown (12h): Supported 00:07:32.517 Copy (19h): Supported LBA-Change 00:07:32.517 Unknown (1Dh): Supported LBA-Change 00:07:32.517 00:07:32.517 Error Log 00:07:32.517 ========= 00:07:32.517 00:07:32.517 Arbitration 00:07:32.517 =========== 00:07:32.517 Arbitration Burst: no limit 00:07:32.517 00:07:32.517 Power Management 00:07:32.517 ================ 00:07:32.517 Number of Power States: 1 00:07:32.517 Current Power State: Power State #0 00:07:32.517 Power State #0: 00:07:32.517 Max Power: 25.00 W 00:07:32.517 Non-Operational State: Operational 00:07:32.517 Entry Latency: 16 microseconds 00:07:32.517 Exit Latency: 4 microseconds 00:07:32.517 Relative Read Throughput: 0 00:07:32.517 Relative Read Latency: 0 00:07:32.517 Relative Write Throughput: 0 00:07:32.517 Relative Write Latency: 0 00:07:32.518 Idle Power: Not Reported 00:07:32.518 Active Power: Not Reported 00:07:32.518 Non-Operational Permissive Mode: Not Supported 00:07:32.518 00:07:32.518 Health Information 00:07:32.518 ================== 00:07:32.518 Critical Warnings: 00:07:32.518 Available Spare Space: OK 00:07:32.518 Temperature: OK 00:07:32.518 Device Reliability: OK 00:07:32.518 Read Only: No 00:07:32.518 Volatile Memory Backup: OK 00:07:32.518 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.518 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.518 Available Spare: 0% 00:07:32.518 Available Spare Threshold: 0% 00:07:32.518 Life Percentage Used: 0% 00:07:32.518 Data Units Read: 665 00:07:32.518 Data Units Written: 593 00:07:32.518 Host Read Commands: 36531 00:07:32.518 Host Write Commands: 36317 00:07:32.518 Controller Busy Time: 0 minutes 00:07:32.518 Power Cycles: 0 00:07:32.518 Power On Hours: 0 hours 00:07:32.518 Unsafe Shutdowns: 0 00:07:32.518 Unrecoverable Media Errors: 0 00:07:32.518 Lifetime Error Log Entries: 0 00:07:32.518 Warning Temperature Time: 0 minutes 00:07:32.518 Critical Temperature Time: 0 minutes 00:07:32.518 00:07:32.518 Number of Queues 00:07:32.518 ================ 00:07:32.518 Number of I/O Submission Queues: 64 00:07:32.518 Number of I/O Completion Queues: 64 00:07:32.518 00:07:32.518 ZNS Specific Controller Data 00:07:32.518 ============================ 00:07:32.518 Zone Append Size Limit: 0 00:07:32.518 00:07:32.518 00:07:32.518 Active Namespaces 00:07:32.518 ================= 00:07:32.518 Namespace ID:1 00:07:32.518 Error Recovery Timeout: Unlimited 00:07:32.518 Command Set Identifier: NVM (00h) 00:07:32.518 Deallocate: Supported 00:07:32.518 Deallocated/Unwritten Error: Supported 00:07:32.518 Deallocated Read Value: All 0x00 00:07:32.518 Deallocate in Write Zeroes: Not Supported 00:07:32.518 Deallocated Guard Field: 0xFFFF 00:07:32.518 Flush: 
Supported 00:07:32.518 Reservation: Not Supported 00:07:32.518 Metadata Transferred as: Separate Metadata Buffer 00:07:32.518 Namespace Sharing Capabilities: Private 00:07:32.518 Size (in LBAs): 1548666 (5GiB) 00:07:32.518 Capacity (in LBAs): 1548666 (5GiB) 00:07:32.518 Utilization (in LBAs): 1548666 (5GiB) 00:07:32.518 Thin Provisioning: Not Supported 00:07:32.518 Per-NS Atomic Units: No 00:07:32.518 Maximum Single Source Range Length: 128 00:07:32.518 Maximum Copy Length: 128 00:07:32.518 Maximum Source Range Count: 128 00:07:32.518 NGUID/EUI64 Never Reused: No 00:07:32.518 Namespace Write Protected: No 00:07:32.518 Number of LBA Formats: 8 00:07:32.518 Current LBA Format: LBA Format #07 00:07:32.518 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.518 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.518 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.518 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.518 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.518 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.518 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.518 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.518 00:07:32.518 NVM Specific Namespace Data 00:07:32.518 =========================== 00:07:32.518 Logical Block Storage Tag Mask: 0 00:07:32.518 Protection Information Capabilities: 00:07:32.518 16b Guard Protection Information Storage Tag Support: No 00:07:32.518 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.518 Storage Tag Check Read Support: No 00:07:32.518 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.518 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.518 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.518 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.518 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.518 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.518 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.518 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.518 20:34:49 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.518 20:34:49 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:32.777 ===================================================== 00:07:32.777 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:32.777 ===================================================== 00:07:32.777 Controller Capabilities/Features 00:07:32.777 ================================ 00:07:32.777 Vendor ID: 1b36 00:07:32.777 Subsystem Vendor ID: 1af4 00:07:32.777 Serial Number: 12341 00:07:32.777 Model Number: QEMU NVMe Ctrl 00:07:32.777 Firmware Version: 8.0.0 00:07:32.777 Recommended Arb Burst: 6 00:07:32.777 IEEE OUI Identifier: 00 54 52 00:07:32.777 Multi-path I/O 00:07:32.777 May have multiple subsystem ports: No 00:07:32.777 May have multiple controllers: No 00:07:32.777 Associated with SR-IOV VF: No 00:07:32.777 Max Data Transfer Size: 524288 00:07:32.777 Max Number of Namespaces: 256 00:07:32.777 Max Number of I/O Queues: 64 00:07:32.777 NVMe 
Specification Version (VS): 1.4 00:07:32.777 NVMe Specification Version (Identify): 1.4 00:07:32.777 Maximum Queue Entries: 2048 00:07:32.777 Contiguous Queues Required: Yes 00:07:32.777 Arbitration Mechanisms Supported 00:07:32.777 Weighted Round Robin: Not Supported 00:07:32.777 Vendor Specific: Not Supported 00:07:32.777 Reset Timeout: 7500 ms 00:07:32.777 Doorbell Stride: 4 bytes 00:07:32.777 NVM Subsystem Reset: Not Supported 00:07:32.777 Command Sets Supported 00:07:32.777 NVM Command Set: Supported 00:07:32.777 Boot Partition: Not Supported 00:07:32.777 Memory Page Size Minimum: 4096 bytes 00:07:32.777 Memory Page Size Maximum: 65536 bytes 00:07:32.777 Persistent Memory Region: Not Supported 00:07:32.777 Optional Asynchronous Events Supported 00:07:32.777 Namespace Attribute Notices: Supported 00:07:32.777 Firmware Activation Notices: Not Supported 00:07:32.777 ANA Change Notices: Not Supported 00:07:32.777 PLE Aggregate Log Change Notices: Not Supported 00:07:32.777 LBA Status Info Alert Notices: Not Supported 00:07:32.777 EGE Aggregate Log Change Notices: Not Supported 00:07:32.777 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.777 Zone Descriptor Change Notices: Not Supported 00:07:32.777 Discovery Log Change Notices: Not Supported 00:07:32.777 Controller Attributes 00:07:32.777 128-bit Host Identifier: Not Supported 00:07:32.777 Non-Operational Permissive Mode: Not Supported 00:07:32.777 NVM Sets: Not Supported 00:07:32.777 Read Recovery Levels: Not Supported 00:07:32.777 Endurance Groups: Not Supported 00:07:32.777 Predictable Latency Mode: Not Supported 00:07:32.777 Traffic Based Keep Alive: Not Supported 00:07:32.777 Namespace Granularity: Not Supported 00:07:32.777 SQ Associations: Not Supported 00:07:32.777 UUID List: Not Supported 00:07:32.777 Multi-Domain Subsystem: Not Supported 00:07:32.777 Fixed Capacity Management: Not Supported 00:07:32.777 Variable Capacity Management: Not Supported 00:07:32.777 Delete Endurance Group: Not Supported 00:07:32.777 Delete NVM Set: Not Supported 00:07:32.777 Extended LBA Formats Supported: Supported 00:07:32.777 Flexible Data Placement Supported: Not Supported 00:07:32.777 00:07:32.777 Controller Memory Buffer Support 00:07:32.777 ================================ 00:07:32.777 Supported: No 00:07:32.777 00:07:32.777 Persistent Memory Region Support 00:07:32.777 ================================ 00:07:32.777 Supported: No 00:07:32.777 00:07:32.777 Admin Command Set Attributes 00:07:32.777 ============================ 00:07:32.777 Security Send/Receive: Not Supported 00:07:32.777 Format NVM: Supported 00:07:32.777 Firmware Activate/Download: Not Supported 00:07:32.777 Namespace Management: Supported 00:07:32.777 Device Self-Test: Not Supported 00:07:32.777 Directives: Supported 00:07:32.777 NVMe-MI: Not Supported 00:07:32.777 Virtualization Management: Not Supported 00:07:32.777 Doorbell Buffer Config: Supported 00:07:32.777 Get LBA Status Capability: Not Supported 00:07:32.777 Command & Feature Lockdown Capability: Not Supported 00:07:32.777 Abort Command Limit: 4 00:07:32.777 Async Event Request Limit: 4 00:07:32.777 Number of Firmware Slots: N/A 00:07:32.777 Firmware Slot 1 Read-Only: N/A 00:07:32.778 Multiple Update Detection Support: N/A 00:07:32.778 Firmware Update Granularity: No Information Provided 00:07:32.778 Per-Namespace SMART Log: Yes 00:07:32.778 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.778 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:32.778 Command Effects Log Page: Supported 00:07:32.778 Get Log Page Extended Data: Supported 00:07:32.778 Telemetry Log Pages: Not Supported 00:07:32.778 Persistent Event Log Pages: Not Supported 00:07:32.778 Supported Log Pages Log Page: May Support 00:07:32.778 Commands Supported & Effects Log Page: Not Supported 00:07:32.778 Feature Identifiers & Effects Log Page: May Support 00:07:32.778 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.778 Data Area 4 for Telemetry Log: Not Supported 00:07:32.778 Error Log Page Entries Supported: 1 00:07:32.778 Keep Alive: Not Supported 00:07:32.778 00:07:32.778 NVM Command Set Attributes 00:07:32.778 ========================== 00:07:32.778 Submission Queue Entry Size 00:07:32.778 Max: 64 00:07:32.778 Min: 64 00:07:32.778 Completion Queue Entry Size 00:07:32.778 Max: 16 00:07:32.778 Min: 16 00:07:32.778 Number of Namespaces: 256 00:07:32.778 Compare Command: Supported 00:07:32.778 Write Uncorrectable Command: Not Supported 00:07:32.778 Dataset Management Command: Supported 00:07:32.778 Write Zeroes Command: Supported 00:07:32.778 Set Features Save Field: Supported 00:07:32.778 Reservations: Not Supported 00:07:32.778 Timestamp: Supported 00:07:32.778 Copy: Supported 00:07:32.778 Volatile Write Cache: Present 00:07:32.778 Atomic Write Unit (Normal): 1 00:07:32.778 Atomic Write Unit (PFail): 1 00:07:32.778 Atomic Compare & Write Unit: 1 00:07:32.778 Fused Compare & Write: Not Supported 00:07:32.778 Scatter-Gather List 00:07:32.778 SGL Command Set: Supported 00:07:32.778 SGL Keyed: Not Supported 00:07:32.778 SGL Bit Bucket Descriptor: Not Supported 00:07:32.778 SGL Metadata Pointer: Not Supported 00:07:32.778 Oversized SGL: Not Supported 00:07:32.778 SGL Metadata Address: Not Supported 00:07:32.778 SGL Offset: Not Supported 00:07:32.778 Transport SGL Data Block: Not Supported 00:07:32.778 Replay Protected Memory Block: Not Supported 00:07:32.778 00:07:32.778 Firmware Slot Information 00:07:32.778 ========================= 00:07:32.778 Active slot: 1 00:07:32.778 Slot 1 Firmware Revision: 1.0 00:07:32.778 00:07:32.778 00:07:32.778 Commands Supported and Effects 00:07:32.778 ============================== 00:07:32.778 Admin Commands 00:07:32.778 -------------- 00:07:32.778 Delete I/O Submission Queue (00h): Supported 00:07:32.778 Create I/O Submission Queue (01h): Supported 00:07:32.778 Get Log Page (02h): Supported 00:07:32.778 Delete I/O Completion Queue (04h): Supported 00:07:32.778 Create I/O Completion Queue (05h): Supported 00:07:32.778 Identify (06h): Supported 00:07:32.778 Abort (08h): Supported 00:07:32.778 Set Features (09h): Supported 00:07:32.778 Get Features (0Ah): Supported 00:07:32.778 Asynchronous Event Request (0Ch): Supported 00:07:32.778 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.778 Directive Send (19h): Supported 00:07:32.778 Directive Receive (1Ah): Supported 00:07:32.778 Virtualization Management (1Ch): Supported 00:07:32.778 Doorbell Buffer Config (7Ch): Supported 00:07:32.778 Format NVM (80h): Supported LBA-Change 00:07:32.778 I/O Commands 00:07:32.778 ------------ 00:07:32.778 Flush (00h): Supported LBA-Change 00:07:32.778 Write (01h): Supported LBA-Change 00:07:32.778 Read (02h): Supported 00:07:32.778 Compare (05h): Supported 00:07:32.778 Write Zeroes (08h): Supported LBA-Change 00:07:32.778 Dataset Management (09h): Supported LBA-Change 00:07:32.778 Unknown (0Ch): Supported 00:07:32.778 Unknown (12h): Supported 00:07:32.778 Copy (19h): Supported LBA-Change 00:07:32.778 Unknown (1Dh): 
Supported LBA-Change 00:07:32.778 00:07:32.778 Error Log 00:07:32.778 ========= 00:07:32.778 00:07:32.778 Arbitration 00:07:32.778 =========== 00:07:32.778 Arbitration Burst: no limit 00:07:32.778 00:07:32.778 Power Management 00:07:32.778 ================ 00:07:32.778 Number of Power States: 1 00:07:32.778 Current Power State: Power State #0 00:07:32.778 Power State #0: 00:07:32.778 Max Power: 25.00 W 00:07:32.778 Non-Operational State: Operational 00:07:32.778 Entry Latency: 16 microseconds 00:07:32.778 Exit Latency: 4 microseconds 00:07:32.778 Relative Read Throughput: 0 00:07:32.778 Relative Read Latency: 0 00:07:32.778 Relative Write Throughput: 0 00:07:32.778 Relative Write Latency: 0 00:07:32.778 Idle Power: Not Reported 00:07:32.778 Active Power: Not Reported 00:07:32.778 Non-Operational Permissive Mode: Not Supported 00:07:32.778 00:07:32.778 Health Information 00:07:32.778 ================== 00:07:32.778 Critical Warnings: 00:07:32.778 Available Spare Space: OK 00:07:32.778 Temperature: OK 00:07:32.778 Device Reliability: OK 00:07:32.778 Read Only: No 00:07:32.778 Volatile Memory Backup: OK 00:07:32.778 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.778 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.778 Available Spare: 0% 00:07:32.778 Available Spare Threshold: 0% 00:07:32.778 Life Percentage Used: 0% 00:07:32.778 Data Units Read: 1048 00:07:32.778 Data Units Written: 915 00:07:32.778 Host Read Commands: 54981 00:07:32.778 Host Write Commands: 53778 00:07:32.778 Controller Busy Time: 0 minutes 00:07:32.778 Power Cycles: 0 00:07:32.778 Power On Hours: 0 hours 00:07:32.778 Unsafe Shutdowns: 0 00:07:32.778 Unrecoverable Media Errors: 0 00:07:32.778 Lifetime Error Log Entries: 0 00:07:32.778 Warning Temperature Time: 0 minutes 00:07:32.778 Critical Temperature Time: 0 minutes 00:07:32.778 00:07:32.778 Number of Queues 00:07:32.778 ================ 00:07:32.778 Number of I/O Submission Queues: 64 00:07:32.778 Number of I/O Completion Queues: 64 00:07:32.778 00:07:32.778 ZNS Specific Controller Data 00:07:32.778 ============================ 00:07:32.778 Zone Append Size Limit: 0 00:07:32.778 00:07:32.778 00:07:32.778 Active Namespaces 00:07:32.778 ================= 00:07:32.778 Namespace ID:1 00:07:32.778 Error Recovery Timeout: Unlimited 00:07:32.778 Command Set Identifier: NVM (00h) 00:07:32.778 Deallocate: Supported 00:07:32.778 Deallocated/Unwritten Error: Supported 00:07:32.778 Deallocated Read Value: All 0x00 00:07:32.778 Deallocate in Write Zeroes: Not Supported 00:07:32.778 Deallocated Guard Field: 0xFFFF 00:07:32.778 Flush: Supported 00:07:32.778 Reservation: Not Supported 00:07:32.778 Namespace Sharing Capabilities: Private 00:07:32.778 Size (in LBAs): 1310720 (5GiB) 00:07:32.778 Capacity (in LBAs): 1310720 (5GiB) 00:07:32.778 Utilization (in LBAs): 1310720 (5GiB) 00:07:32.778 Thin Provisioning: Not Supported 00:07:32.778 Per-NS Atomic Units: No 00:07:32.778 Maximum Single Source Range Length: 128 00:07:32.778 Maximum Copy Length: 128 00:07:32.778 Maximum Source Range Count: 128 00:07:32.778 NGUID/EUI64 Never Reused: No 00:07:32.778 Namespace Write Protected: No 00:07:32.778 Number of LBA Formats: 8 00:07:32.778 Current LBA Format: LBA Format #04 00:07:32.778 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.778 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.778 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.778 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.778 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:32.778 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.778 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.778 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.778 00:07:32.778 NVM Specific Namespace Data 00:07:32.778 =========================== 00:07:32.778 Logical Block Storage Tag Mask: 0 00:07:32.778 Protection Information Capabilities: 00:07:32.778 16b Guard Protection Information Storage Tag Support: No 00:07:32.778 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.778 Storage Tag Check Read Support: No 00:07:32.778 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.778 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.778 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.778 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.778 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.778 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.778 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.778 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.778 20:34:49 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.778 20:34:49 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:33.040 ===================================================== 00:07:33.040 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:33.040 ===================================================== 00:07:33.040 Controller Capabilities/Features 00:07:33.040 ================================ 00:07:33.040 Vendor ID: 1b36 00:07:33.040 Subsystem Vendor ID: 1af4 00:07:33.040 Serial Number: 12342 00:07:33.040 Model Number: QEMU NVMe Ctrl 00:07:33.040 Firmware Version: 8.0.0 00:07:33.040 Recommended Arb Burst: 6 00:07:33.040 IEEE OUI Identifier: 00 54 52 00:07:33.040 Multi-path I/O 00:07:33.040 May have multiple subsystem ports: No 00:07:33.040 May have multiple controllers: No 00:07:33.040 Associated with SR-IOV VF: No 00:07:33.040 Max Data Transfer Size: 524288 00:07:33.040 Max Number of Namespaces: 256 00:07:33.040 Max Number of I/O Queues: 64 00:07:33.040 NVMe Specification Version (VS): 1.4 00:07:33.040 NVMe Specification Version (Identify): 1.4 00:07:33.040 Maximum Queue Entries: 2048 00:07:33.040 Contiguous Queues Required: Yes 00:07:33.040 Arbitration Mechanisms Supported 00:07:33.040 Weighted Round Robin: Not Supported 00:07:33.040 Vendor Specific: Not Supported 00:07:33.040 Reset Timeout: 7500 ms 00:07:33.040 Doorbell Stride: 4 bytes 00:07:33.040 NVM Subsystem Reset: Not Supported 00:07:33.040 Command Sets Supported 00:07:33.040 NVM Command Set: Supported 00:07:33.040 Boot Partition: Not Supported 00:07:33.040 Memory Page Size Minimum: 4096 bytes 00:07:33.040 Memory Page Size Maximum: 65536 bytes 00:07:33.040 Persistent Memory Region: Not Supported 00:07:33.040 Optional Asynchronous Events Supported 00:07:33.040 Namespace Attribute Notices: Supported 00:07:33.040 Firmware Activation Notices: Not Supported 00:07:33.040 ANA Change Notices: Not Supported 00:07:33.040 PLE Aggregate Log Change Notices: Not Supported 00:07:33.040 LBA Status Info Alert Notices: 
Not Supported 00:07:33.040 EGE Aggregate Log Change Notices: Not Supported 00:07:33.040 Normal NVM Subsystem Shutdown event: Not Supported 00:07:33.040 Zone Descriptor Change Notices: Not Supported 00:07:33.040 Discovery Log Change Notices: Not Supported 00:07:33.040 Controller Attributes 00:07:33.040 128-bit Host Identifier: Not Supported 00:07:33.040 Non-Operational Permissive Mode: Not Supported 00:07:33.040 NVM Sets: Not Supported 00:07:33.040 Read Recovery Levels: Not Supported 00:07:33.040 Endurance Groups: Not Supported 00:07:33.040 Predictable Latency Mode: Not Supported 00:07:33.040 Traffic Based Keep Alive: Not Supported 00:07:33.040 Namespace Granularity: Not Supported 00:07:33.040 SQ Associations: Not Supported 00:07:33.040 UUID List: Not Supported 00:07:33.040 Multi-Domain Subsystem: Not Supported 00:07:33.040 Fixed Capacity Management: Not Supported 00:07:33.040 Variable Capacity Management: Not Supported 00:07:33.040 Delete Endurance Group: Not Supported 00:07:33.040 Delete NVM Set: Not Supported 00:07:33.040 Extended LBA Formats Supported: Supported 00:07:33.040 Flexible Data Placement Supported: Not Supported 00:07:33.040 00:07:33.040 Controller Memory Buffer Support 00:07:33.040 ================================ 00:07:33.040 Supported: No 00:07:33.040 00:07:33.040 Persistent Memory Region Support 00:07:33.040 ================================ 00:07:33.040 Supported: No 00:07:33.040 00:07:33.040 Admin Command Set Attributes 00:07:33.040 ============================ 00:07:33.040 Security Send/Receive: Not Supported 00:07:33.040 Format NVM: Supported 00:07:33.040 Firmware Activate/Download: Not Supported 00:07:33.040 Namespace Management: Supported 00:07:33.040 Device Self-Test: Not Supported 00:07:33.040 Directives: Supported 00:07:33.040 NVMe-MI: Not Supported 00:07:33.040 Virtualization Management: Not Supported 00:07:33.040 Doorbell Buffer Config: Supported 00:07:33.040 Get LBA Status Capability: Not Supported 00:07:33.040 Command & Feature Lockdown Capability: Not Supported 00:07:33.040 Abort Command Limit: 4 00:07:33.040 Async Event Request Limit: 4 00:07:33.040 Number of Firmware Slots: N/A 00:07:33.040 Firmware Slot 1 Read-Only: N/A 00:07:33.040 Firmware Activation Without Reset: N/A 00:07:33.040 Multiple Update Detection Support: N/A 00:07:33.040 Firmware Update Granularity: No Information Provided 00:07:33.040 Per-Namespace SMART Log: Yes 00:07:33.040 Asymmetric Namespace Access Log Page: Not Supported 00:07:33.040 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:33.040 Command Effects Log Page: Supported 00:07:33.040 Get Log Page Extended Data: Supported 00:07:33.040 Telemetry Log Pages: Not Supported 00:07:33.040 Persistent Event Log Pages: Not Supported 00:07:33.040 Supported Log Pages Log Page: May Support 00:07:33.040 Commands Supported & Effects Log Page: Not Supported 00:07:33.040 Feature Identifiers & Effects Log Page: May Support 00:07:33.040 NVMe-MI Commands & Effects Log Page: May Support 00:07:33.040 Data Area 4 for Telemetry Log: Not Supported 00:07:33.040 Error Log Page Entries Supported: 1 00:07:33.040 Keep Alive: Not Supported 00:07:33.040 00:07:33.040 NVM Command Set Attributes 00:07:33.040 ========================== 00:07:33.040 Submission Queue Entry Size 00:07:33.040 Max: 64 00:07:33.040 Min: 64 00:07:33.040 Completion Queue Entry Size 00:07:33.040 Max: 16 00:07:33.040 Min: 16 00:07:33.040 Number of Namespaces: 256 00:07:33.040 Compare Command: Supported 00:07:33.040 Write Uncorrectable Command: Not Supported 00:07:33.040 Dataset Management Command: 
Supported 00:07:33.040 Write Zeroes Command: Supported 00:07:33.040 Set Features Save Field: Supported 00:07:33.040 Reservations: Not Supported 00:07:33.040 Timestamp: Supported 00:07:33.040 Copy: Supported 00:07:33.040 Volatile Write Cache: Present 00:07:33.040 Atomic Write Unit (Normal): 1 00:07:33.040 Atomic Write Unit (PFail): 1 00:07:33.040 Atomic Compare & Write Unit: 1 00:07:33.040 Fused Compare & Write: Not Supported 00:07:33.040 Scatter-Gather List 00:07:33.040 SGL Command Set: Supported 00:07:33.040 SGL Keyed: Not Supported 00:07:33.040 SGL Bit Bucket Descriptor: Not Supported 00:07:33.040 SGL Metadata Pointer: Not Supported 00:07:33.040 Oversized SGL: Not Supported 00:07:33.040 SGL Metadata Address: Not Supported 00:07:33.040 SGL Offset: Not Supported 00:07:33.040 Transport SGL Data Block: Not Supported 00:07:33.040 Replay Protected Memory Block: Not Supported 00:07:33.040 00:07:33.040 Firmware Slot Information 00:07:33.040 ========================= 00:07:33.040 Active slot: 1 00:07:33.040 Slot 1 Firmware Revision: 1.0 00:07:33.040 00:07:33.040 00:07:33.040 Commands Supported and Effects 00:07:33.040 ============================== 00:07:33.040 Admin Commands 00:07:33.040 -------------- 00:07:33.040 Delete I/O Submission Queue (00h): Supported 00:07:33.040 Create I/O Submission Queue (01h): Supported 00:07:33.040 Get Log Page (02h): Supported 00:07:33.040 Delete I/O Completion Queue (04h): Supported 00:07:33.040 Create I/O Completion Queue (05h): Supported 00:07:33.041 Identify (06h): Supported 00:07:33.041 Abort (08h): Supported 00:07:33.041 Set Features (09h): Supported 00:07:33.041 Get Features (0Ah): Supported 00:07:33.041 Asynchronous Event Request (0Ch): Supported 00:07:33.041 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:33.041 Directive Send (19h): Supported 00:07:33.041 Directive Receive (1Ah): Supported 00:07:33.041 Virtualization Management (1Ch): Supported 00:07:33.041 Doorbell Buffer Config (7Ch): Supported 00:07:33.041 Format NVM (80h): Supported LBA-Change 00:07:33.041 I/O Commands 00:07:33.041 ------------ 00:07:33.041 Flush (00h): Supported LBA-Change 00:07:33.041 Write (01h): Supported LBA-Change 00:07:33.041 Read (02h): Supported 00:07:33.041 Compare (05h): Supported 00:07:33.041 Write Zeroes (08h): Supported LBA-Change 00:07:33.041 Dataset Management (09h): Supported LBA-Change 00:07:33.041 Unknown (0Ch): Supported 00:07:33.041 Unknown (12h): Supported 00:07:33.041 Copy (19h): Supported LBA-Change 00:07:33.041 Unknown (1Dh): Supported LBA-Change 00:07:33.041 00:07:33.041 Error Log 00:07:33.041 ========= 00:07:33.041 00:07:33.041 Arbitration 00:07:33.041 =========== 00:07:33.041 Arbitration Burst: no limit 00:07:33.041 00:07:33.041 Power Management 00:07:33.041 ================ 00:07:33.041 Number of Power States: 1 00:07:33.041 Current Power State: Power State #0 00:07:33.041 Power State #0: 00:07:33.041 Max Power: 25.00 W 00:07:33.041 Non-Operational State: Operational 00:07:33.041 Entry Latency: 16 microseconds 00:07:33.041 Exit Latency: 4 microseconds 00:07:33.041 Relative Read Throughput: 0 00:07:33.041 Relative Read Latency: 0 00:07:33.041 Relative Write Throughput: 0 00:07:33.041 Relative Write Latency: 0 00:07:33.041 Idle Power: Not Reported 00:07:33.041 Active Power: Not Reported 00:07:33.041 Non-Operational Permissive Mode: Not Supported 00:07:33.041 00:07:33.041 Health Information 00:07:33.041 ================== 00:07:33.041 Critical Warnings: 00:07:33.041 Available Spare Space: OK 00:07:33.041 Temperature: OK 00:07:33.041 Device 
Reliability: OK 00:07:33.041 Read Only: No 00:07:33.041 Volatile Memory Backup: OK 00:07:33.041 Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.041 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:33.041 Available Spare: 0% 00:07:33.041 Available Spare Threshold: 0% 00:07:33.041 Life Percentage Used: 0% 00:07:33.041 Data Units Read: 2228 00:07:33.041 Data Units Written: 2015 00:07:33.041 Host Read Commands: 112503 00:07:33.041 Host Write Commands: 110772 00:07:33.041 Controller Busy Time: 0 minutes 00:07:33.041 Power Cycles: 0 00:07:33.041 Power On Hours: 0 hours 00:07:33.041 Unsafe Shutdowns: 0 00:07:33.041 Unrecoverable Media Errors: 0 00:07:33.041 Lifetime Error Log Entries: 0 00:07:33.041 Warning Temperature Time: 0 minutes 00:07:33.041 Critical Temperature Time: 0 minutes 00:07:33.041 00:07:33.041 Number of Queues 00:07:33.041 ================ 00:07:33.041 Number of I/O Submission Queues: 64 00:07:33.041 Number of I/O Completion Queues: 64 00:07:33.041 00:07:33.041 ZNS Specific Controller Data 00:07:33.041 ============================ 00:07:33.041 Zone Append Size Limit: 0 00:07:33.041 00:07:33.041 00:07:33.041 Active Namespaces 00:07:33.041 ================= 00:07:33.041 Namespace ID:1 00:07:33.041 Error Recovery Timeout: Unlimited 00:07:33.041 Command Set Identifier: NVM (00h) 00:07:33.041 Deallocate: Supported 00:07:33.041 Deallocated/Unwritten Error: Supported 00:07:33.041 Deallocated Read Value: All 0x00 00:07:33.041 Deallocate in Write Zeroes: Not Supported 00:07:33.041 Deallocated Guard Field: 0xFFFF 00:07:33.041 Flush: Supported 00:07:33.041 Reservation: Not Supported 00:07:33.041 Namespace Sharing Capabilities: Private 00:07:33.041 Size (in LBAs): 1048576 (4GiB) 00:07:33.041 Capacity (in LBAs): 1048576 (4GiB) 00:07:33.041 Utilization (in LBAs): 1048576 (4GiB) 00:07:33.041 Thin Provisioning: Not Supported 00:07:33.041 Per-NS Atomic Units: No 00:07:33.041 Maximum Single Source Range Length: 128 00:07:33.041 Maximum Copy Length: 128 00:07:33.041 Maximum Source Range Count: 128 00:07:33.041 NGUID/EUI64 Never Reused: No 00:07:33.041 Namespace Write Protected: No 00:07:33.041 Number of LBA Formats: 8 00:07:33.041 Current LBA Format: LBA Format #04 00:07:33.041 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:33.041 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.041 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:33.041 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.041 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.041 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.041 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:33.041 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.041 00:07:33.041 NVM Specific Namespace Data 00:07:33.041 =========================== 00:07:33.041 Logical Block Storage Tag Mask: 0 00:07:33.041 Protection Information Capabilities: 00:07:33.041 16b Guard Protection Information Storage Tag Support: No 00:07:33.041 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.041 Storage Tag Check Read Support: No 00:07:33.041 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Namespace ID:2 00:07:33.041 Error Recovery Timeout: Unlimited 00:07:33.041 Command Set Identifier: NVM (00h) 00:07:33.041 Deallocate: Supported 00:07:33.041 Deallocated/Unwritten Error: Supported 00:07:33.041 Deallocated Read Value: All 0x00 00:07:33.041 Deallocate in Write Zeroes: Not Supported 00:07:33.041 Deallocated Guard Field: 0xFFFF 00:07:33.041 Flush: Supported 00:07:33.041 Reservation: Not Supported 00:07:33.041 Namespace Sharing Capabilities: Private 00:07:33.041 Size (in LBAs): 1048576 (4GiB) 00:07:33.041 Capacity (in LBAs): 1048576 (4GiB) 00:07:33.041 Utilization (in LBAs): 1048576 (4GiB) 00:07:33.041 Thin Provisioning: Not Supported 00:07:33.041 Per-NS Atomic Units: No 00:07:33.041 Maximum Single Source Range Length: 128 00:07:33.041 Maximum Copy Length: 128 00:07:33.041 Maximum Source Range Count: 128 00:07:33.041 NGUID/EUI64 Never Reused: No 00:07:33.041 Namespace Write Protected: No 00:07:33.041 Number of LBA Formats: 8 00:07:33.041 Current LBA Format: LBA Format #04 00:07:33.041 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:33.041 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.041 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:33.041 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.041 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.041 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.041 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:33.041 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.041 00:07:33.041 NVM Specific Namespace Data 00:07:33.041 =========================== 00:07:33.041 Logical Block Storage Tag Mask: 0 00:07:33.041 Protection Information Capabilities: 00:07:33.041 16b Guard Protection Information Storage Tag Support: No 00:07:33.041 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.041 Storage Tag Check Read Support: No 00:07:33.041 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.041 Namespace ID:3 00:07:33.041 Error Recovery Timeout: Unlimited 00:07:33.041 Command Set Identifier: NVM (00h) 00:07:33.041 Deallocate: Supported 00:07:33.041 Deallocated/Unwritten Error: Supported 00:07:33.041 Deallocated Read Value: All 0x00 00:07:33.041 Deallocate in Write Zeroes: Not Supported 00:07:33.041 Deallocated Guard Field: 0xFFFF 00:07:33.041 Flush: Supported 00:07:33.041 Reservation: Not Supported 00:07:33.041 
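As a quick cross-check of the namespace figures above (a plain-shell sketch; every number is taken from the identify output itself, nothing here is SPDK-specific): the 4GiB size is simply the LBA count times the 4096-byte data size of the current LBA Format #04, and the Kelvin readings convert to the Celsius values shown in parentheses by subtracting 273.
  # Size (in LBAs) x data size of the current LBA Format #04
  echo $((1048576 * 4096))              # 4294967296 bytes = 4 GiB
  # Kelvin -> Celsius, matching "323 Kelvin (50 Celsius)" / "343 Kelvin (70 Celsius)"
  echo $((323 - 273)) $((343 - 273))    # 50 70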
Namespace Sharing Capabilities: Private 00:07:33.041 Size (in LBAs): 1048576 (4GiB) 00:07:33.042 Capacity (in LBAs): 1048576 (4GiB) 00:07:33.042 Utilization (in LBAs): 1048576 (4GiB) 00:07:33.042 Thin Provisioning: Not Supported 00:07:33.042 Per-NS Atomic Units: No 00:07:33.042 Maximum Single Source Range Length: 128 00:07:33.042 Maximum Copy Length: 128 00:07:33.042 Maximum Source Range Count: 128 00:07:33.042 NGUID/EUI64 Never Reused: No 00:07:33.042 Namespace Write Protected: No 00:07:33.042 Number of LBA Formats: 8 00:07:33.042 Current LBA Format: LBA Format #04 00:07:33.042 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:33.042 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.042 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:33.042 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.042 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.042 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.042 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:33.042 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.042 00:07:33.042 NVM Specific Namespace Data 00:07:33.042 =========================== 00:07:33.042 Logical Block Storage Tag Mask: 0 00:07:33.042 Protection Information Capabilities: 00:07:33.042 16b Guard Protection Information Storage Tag Support: No 00:07:33.042 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.042 Storage Tag Check Read Support: No 00:07:33.042 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.042 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.042 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.042 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.042 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.042 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.042 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.042 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.042 20:34:49 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:33.042 20:34:49 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:33.042 ===================================================== 00:07:33.042 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:33.042 ===================================================== 00:07:33.042 Controller Capabilities/Features 00:07:33.042 ================================ 00:07:33.042 Vendor ID: 1b36 00:07:33.042 Subsystem Vendor ID: 1af4 00:07:33.042 Serial Number: 12343 00:07:33.042 Model Number: QEMU NVMe Ctrl 00:07:33.042 Firmware Version: 8.0.0 00:07:33.042 Recommended Arb Burst: 6 00:07:33.042 IEEE OUI Identifier: 00 54 52 00:07:33.042 Multi-path I/O 00:07:33.042 May have multiple subsystem ports: No 00:07:33.042 May have multiple controllers: Yes 00:07:33.042 Associated with SR-IOV VF: No 00:07:33.042 Max Data Transfer Size: 524288 00:07:33.042 Max Number of Namespaces: 256 00:07:33.042 Max Number of I/O Queues: 64 00:07:33.042 NVMe Specification Version (VS): 1.4 00:07:33.042 NVMe Specification Version (Identify): 1.4 00:07:33.042 Maximum Queue Entries: 2048 
00:07:33.042 Contiguous Queues Required: Yes 00:07:33.042 Arbitration Mechanisms Supported 00:07:33.042 Weighted Round Robin: Not Supported 00:07:33.042 Vendor Specific: Not Supported 00:07:33.042 Reset Timeout: 7500 ms 00:07:33.042 Doorbell Stride: 4 bytes 00:07:33.042 NVM Subsystem Reset: Not Supported 00:07:33.042 Command Sets Supported 00:07:33.042 NVM Command Set: Supported 00:07:33.042 Boot Partition: Not Supported 00:07:33.042 Memory Page Size Minimum: 4096 bytes 00:07:33.042 Memory Page Size Maximum: 65536 bytes 00:07:33.042 Persistent Memory Region: Not Supported 00:07:33.042 Optional Asynchronous Events Supported 00:07:33.042 Namespace Attribute Notices: Supported 00:07:33.042 Firmware Activation Notices: Not Supported 00:07:33.042 ANA Change Notices: Not Supported 00:07:33.042 PLE Aggregate Log Change Notices: Not Supported 00:07:33.042 LBA Status Info Alert Notices: Not Supported 00:07:33.042 EGE Aggregate Log Change Notices: Not Supported 00:07:33.042 Normal NVM Subsystem Shutdown event: Not Supported 00:07:33.042 Zone Descriptor Change Notices: Not Supported 00:07:33.042 Discovery Log Change Notices: Not Supported 00:07:33.042 Controller Attributes 00:07:33.042 128-bit Host Identifier: Not Supported 00:07:33.042 Non-Operational Permissive Mode: Not Supported 00:07:33.042 NVM Sets: Not Supported 00:07:33.042 Read Recovery Levels: Not Supported 00:07:33.042 Endurance Groups: Supported 00:07:33.042 Predictable Latency Mode: Not Supported 00:07:33.042 Traffic Based Keep Alive: Not Supported 00:07:33.042 Namespace Granularity: Not Supported 00:07:33.042 SQ Associations: Not Supported 00:07:33.042 UUID List: Not Supported 00:07:33.042 Multi-Domain Subsystem: Not Supported 00:07:33.042 Fixed Capacity Management: Not Supported 00:07:33.042 Variable Capacity Management: Not Supported 00:07:33.042 Delete Endurance Group: Not Supported 00:07:33.042 Delete NVM Set: Not Supported 00:07:33.042 Extended LBA Formats Supported: Supported 00:07:33.042 Flexible Data Placement Supported: Supported 00:07:33.042 00:07:33.042 Controller Memory Buffer Support 00:07:33.042 ================================ 00:07:33.042 Supported: No 00:07:33.042 00:07:33.042 Persistent Memory Region Support 00:07:33.042 ================================ 00:07:33.042 Supported: No 00:07:33.042 00:07:33.042 Admin Command Set Attributes 00:07:33.042 ============================ 00:07:33.042 Security Send/Receive: Not Supported 00:07:33.042 Format NVM: Supported 00:07:33.042 Firmware Activate/Download: Not Supported 00:07:33.042 Namespace Management: Supported 00:07:33.042 Device Self-Test: Not Supported 00:07:33.042 Directives: Supported 00:07:33.042 NVMe-MI: Not Supported 00:07:33.042 Virtualization Management: Not Supported 00:07:33.042 Doorbell Buffer Config: Supported 00:07:33.042 Get LBA Status Capability: Not Supported 00:07:33.042 Command & Feature Lockdown Capability: Not Supported 00:07:33.042 Abort Command Limit: 4 00:07:33.042 Async Event Request Limit: 4 00:07:33.042 Number of Firmware Slots: N/A 00:07:33.042 Firmware Slot 1 Read-Only: N/A 00:07:33.042 Firmware Activation Without Reset: N/A 00:07:33.042 Multiple Update Detection Support: N/A 00:07:33.042 Firmware Update Granularity: No Information Provided 00:07:33.042 Per-Namespace SMART Log: Yes 00:07:33.042 Asymmetric Namespace Access Log Page: Not Supported 00:07:33.042 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:33.042 Command Effects Log Page: Supported 00:07:33.042 Get Log Page Extended Data: Supported 00:07:33.042 Telemetry Log Pages: Not
Supported 00:07:33.042 Persistent Event Log Pages: Not Supported 00:07:33.042 Supported Log Pages Log Page: May Support 00:07:33.042 Commands Supported & Effects Log Page: Not Supported 00:07:33.042 Feature Identifiers & Effects Log Page: May Support 00:07:33.042 NVMe-MI Commands & Effects Log Page: May Support 00:07:33.042 Data Area 4 for Telemetry Log: Not Supported 00:07:33.042 Error Log Page Entries Supported: 1 00:07:33.042 Keep Alive: Not Supported 00:07:33.042 00:07:33.042 NVM Command Set Attributes 00:07:33.042 ========================== 00:07:33.042 Submission Queue Entry Size 00:07:33.042 Max: 64 00:07:33.042 Min: 64 00:07:33.042 Completion Queue Entry Size 00:07:33.042 Max: 16 00:07:33.042 Min: 16 00:07:33.042 Number of Namespaces: 256 00:07:33.042 Compare Command: Supported 00:07:33.042 Write Uncorrectable Command: Not Supported 00:07:33.042 Dataset Management Command: Supported 00:07:33.042 Write Zeroes Command: Supported 00:07:33.042 Set Features Save Field: Supported 00:07:33.042 Reservations: Not Supported 00:07:33.042 Timestamp: Supported 00:07:33.042 Copy: Supported 00:07:33.042 Volatile Write Cache: Present 00:07:33.042 Atomic Write Unit (Normal): 1 00:07:33.042 Atomic Write Unit (PFail): 1 00:07:33.042 Atomic Compare & Write Unit: 1 00:07:33.042 Fused Compare & Write: Not Supported 00:07:33.042 Scatter-Gather List 00:07:33.042 SGL Command Set: Supported 00:07:33.042 SGL Keyed: Not Supported 00:07:33.042 SGL Bit Bucket Descriptor: Not Supported 00:07:33.042 SGL Metadata Pointer: Not Supported 00:07:33.042 Oversized SGL: Not Supported 00:07:33.042 SGL Metadata Address: Not Supported 00:07:33.042 SGL Offset: Not Supported 00:07:33.042 Transport SGL Data Block: Not Supported 00:07:33.042 Replay Protected Memory Block: Not Supported 00:07:33.042 00:07:33.042 Firmware Slot Information 00:07:33.042 ========================= 00:07:33.042 Active slot: 1 00:07:33.042 Slot 1 Firmware Revision: 1.0 00:07:33.042 00:07:33.042 00:07:33.042 Commands Supported and Effects 00:07:33.042 ============================== 00:07:33.042 Admin Commands 00:07:33.042 -------------- 00:07:33.042 Delete I/O Submission Queue (00h): Supported 00:07:33.042 Create I/O Submission Queue (01h): Supported 00:07:33.042 Get Log Page (02h): Supported 00:07:33.043 Delete I/O Completion Queue (04h): Supported 00:07:33.043 Create I/O Completion Queue (05h): Supported 00:07:33.043 Identify (06h): Supported 00:07:33.043 Abort (08h): Supported 00:07:33.043 Set Features (09h): Supported 00:07:33.043 Get Features (0Ah): Supported 00:07:33.043 Asynchronous Event Request (0Ch): Supported 00:07:33.043 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:33.043 Directive Send (19h): Supported 00:07:33.043 Directive Receive (1Ah): Supported 00:07:33.043 Virtualization Management (1Ch): Supported 00:07:33.043 Doorbell Buffer Config (7Ch): Supported 00:07:33.043 Format NVM (80h): Supported LBA-Change 00:07:33.043 I/O Commands 00:07:33.043 ------------ 00:07:33.043 Flush (00h): Supported LBA-Change 00:07:33.043 Write (01h): Supported LBA-Change 00:07:33.043 Read (02h): Supported 00:07:33.043 Compare (05h): Supported 00:07:33.043 Write Zeroes (08h): Supported LBA-Change 00:07:33.043 Dataset Management (09h): Supported LBA-Change 00:07:33.043 Unknown (0Ch): Supported 00:07:33.043 Unknown (12h): Supported 00:07:33.043 Copy (19h): Supported LBA-Change 00:07:33.043 Unknown (1Dh): Supported LBA-Change 00:07:33.043 00:07:33.043 Error Log 00:07:33.043 ========= 00:07:33.043 00:07:33.043 Arbitration 00:07:33.043 ===========
00:07:33.043 Arbitration Burst: no limit 00:07:33.043 00:07:33.043 Power Management 00:07:33.043 ================ 00:07:33.043 Number of Power States: 1 00:07:33.043 Current Power State: Power State #0 00:07:33.043 Power State #0: 00:07:33.043 Max Power: 25.00 W 00:07:33.043 Non-Operational State: Operational 00:07:33.043 Entry Latency: 16 microseconds 00:07:33.043 Exit Latency: 4 microseconds 00:07:33.043 Relative Read Throughput: 0 00:07:33.043 Relative Read Latency: 0 00:07:33.043 Relative Write Throughput: 0 00:07:33.043 Relative Write Latency: 0 00:07:33.043 Idle Power: Not Reported 00:07:33.043 Active Power: Not Reported 00:07:33.043 Non-Operational Permissive Mode: Not Supported 00:07:33.043 00:07:33.043 Health Information 00:07:33.043 ================== 00:07:33.043 Critical Warnings: 00:07:33.043 Available Spare Space: OK 00:07:33.043 Temperature: OK 00:07:33.043 Device Reliability: OK 00:07:33.043 Read Only: No 00:07:33.043 Volatile Memory Backup: OK 00:07:33.043 Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.043 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:33.043 Available Spare: 0% 00:07:33.043 Available Spare Threshold: 0% 00:07:33.043 Life Percentage Used: 0% 00:07:33.043 Data Units Read: 947 00:07:33.043 Data Units Written: 876 00:07:33.043 Host Read Commands: 39092 00:07:33.043 Host Write Commands: 38515 00:07:33.043 Controller Busy Time: 0 minutes 00:07:33.043 Power Cycles: 0 00:07:33.043 Power On Hours: 0 hours 00:07:33.043 Unsafe Shutdowns: 0 00:07:33.043 Unrecoverable Media Errors: 0 00:07:33.043 Lifetime Error Log Entries: 0 00:07:33.043 Warning Temperature Time: 0 minutes 00:07:33.043 Critical Temperature Time: 0 minutes 00:07:33.043 00:07:33.043 Number of Queues 00:07:33.043 ================ 00:07:33.043 Number of I/O Submission Queues: 64 00:07:33.043 Number of I/O Completion Queues: 64 00:07:33.043 00:07:33.043 ZNS Specific Controller Data 00:07:33.043 ============================ 00:07:33.043 Zone Append Size Limit: 0 00:07:33.043 00:07:33.043 00:07:33.043 Active Namespaces 00:07:33.043 ================= 00:07:33.043 Namespace ID:1 00:07:33.043 Error Recovery Timeout: Unlimited 00:07:33.043 Command Set Identifier: NVM (00h) 00:07:33.043 Deallocate: Supported 00:07:33.043 Deallocated/Unwritten Error: Supported 00:07:33.043 Deallocated Read Value: All 0x00 00:07:33.043 Deallocate in Write Zeroes: Not Supported 00:07:33.043 Deallocated Guard Field: 0xFFFF 00:07:33.043 Flush: Supported 00:07:33.043 Reservation: Not Supported 00:07:33.043 Namespace Sharing Capabilities: Multiple Controllers 00:07:33.043 Size (in LBAs): 262144 (1GiB) 00:07:33.043 Capacity (in LBAs): 262144 (1GiB) 00:07:33.043 Utilization (in LBAs): 262144 (1GiB) 00:07:33.043 Thin Provisioning: Not Supported 00:07:33.043 Per-NS Atomic Units: No 00:07:33.043 Maximum Single Source Range Length: 128 00:07:33.043 Maximum Copy Length: 128 00:07:33.043 Maximum Source Range Count: 128 00:07:33.043 NGUID/EUI64 Never Reused: No 00:07:33.043 Namespace Write Protected: No 00:07:33.043 Endurance group ID: 1 00:07:33.043 Number of LBA Formats: 8 00:07:33.043 Current LBA Format: LBA Format #04 00:07:33.043 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:33.043 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.043 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:33.043 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.043 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.043 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.043 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:33.043 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.043 00:07:33.043 Get Feature FDP: 00:07:33.043 ================ 00:07:33.043 Enabled: Yes 00:07:33.043 FDP configuration index: 0 00:07:33.043 00:07:33.043 FDP configurations log page 00:07:33.043 =========================== 00:07:33.043 Number of FDP configurations: 1 00:07:33.043 Version: 0 00:07:33.043 Size: 112 00:07:33.043 FDP Configuration Descriptor: 0 00:07:33.043 Descriptor Size: 96 00:07:33.043 Reclaim Group Identifier format: 2 00:07:33.043 FDP Volatile Write Cache: Not Present 00:07:33.043 FDP Configuration: Valid 00:07:33.043 Vendor Specific Size: 0 00:07:33.043 Number of Reclaim Groups: 2 00:07:33.043 Number of Reclaim Unit Handles: 8 00:07:33.043 Max Placement Identifiers: 128 00:07:33.043 Number of Namespaces Supported: 256 00:07:33.043 Reclaim Unit Nominal Size: 6000000 bytes 00:07:33.043 Estimated Reclaim Unit Time Limit: Not Reported 00:07:33.043 RUH Desc #000: RUH Type: Initially Isolated 00:07:33.043 RUH Desc #001: RUH Type: Initially Isolated 00:07:33.043 RUH Desc #002: RUH Type: Initially Isolated 00:07:33.043 RUH Desc #003: RUH Type: Initially Isolated 00:07:33.043 RUH Desc #004: RUH Type: Initially Isolated 00:07:33.043 RUH Desc #005: RUH Type: Initially Isolated 00:07:33.043 RUH Desc #006: RUH Type: Initially Isolated 00:07:33.043 RUH Desc #007: RUH Type: Initially Isolated 00:07:33.043 00:07:33.043 FDP reclaim unit handle usage log page 00:07:33.302 ====================================== 00:07:33.302 Number of Reclaim Unit Handles: 8 00:07:33.302 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:33.302 RUH Usage Desc #001: RUH Attributes: Unused 00:07:33.302 RUH Usage Desc #002: RUH Attributes: Unused 00:07:33.302 RUH Usage Desc #003: RUH Attributes: Unused 00:07:33.302 RUH Usage Desc #004: RUH Attributes: Unused 00:07:33.302 RUH Usage Desc #005: RUH Attributes: Unused 00:07:33.302 RUH Usage Desc #006: RUH Attributes: Unused 00:07:33.302 RUH Usage Desc #007: RUH Attributes: Unused 00:07:33.302 00:07:33.302 FDP statistics log page 00:07:33.302 ======================= 00:07:33.302 Host bytes with metadata written: 545824768 00:07:33.302 Media bytes with metadata written: 548851712 00:07:33.302 Media bytes erased: 0 00:07:33.302 00:07:33.302 FDP events log page 00:07:33.302 =================== 00:07:33.302 Number of FDP events: 0 00:07:33.302 00:07:33.302 NVM Specific Namespace Data 00:07:33.302 =========================== 00:07:33.302 Logical Block Storage Tag Mask: 0 00:07:33.302 Protection Information Capabilities: 00:07:33.302 16b Guard Protection Information Storage Tag Support: No 00:07:33.302 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.302 Storage Tag Check Read Support: No 00:07:33.302 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.302 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.302 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.302 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.302 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.302 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.302 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.302 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.302 00:07:33.302 real 0m1.255s 00:07:33.302 user 0m0.448s 00:07:33.302 sys 0m0.581s 00:07:33.302 20:34:50 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.302 ************************************ 00:07:33.302 20:34:50 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:33.302 END TEST nvme_identify 00:07:33.302 ************************************ 00:07:33.302 20:34:50 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:33.302 20:34:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.302 20:34:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.302 20:34:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.302 ************************************ 00:07:33.302 START TEST nvme_perf 00:07:33.302 ************************************ 00:07:33.302 20:34:50 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:33.302 20:34:50 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:34.678 Initializing NVMe Controllers 00:07:34.678 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:34.678 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:34.678 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:34.678 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:34.678 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:34.678 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:34.678 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:34.678 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:34.678 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:34.678 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:34.678 Initialization complete. Launching workers. 
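Before the numbers, a note on the invocation: the run_test line above launches spdk_nvme_perf with the flags annotated below. This is a sketch based on the tool's usage text as I read it for this SPDK revision; re-check ./build/bin/spdk_nvme_perf --help before relying on it, since the exact option set shifts between releases.
  # Annotated restatement of the perf command used by this test (flag meanings assumed from --help):
  #   -q 128    queue depth: 128 outstanding I/Os per namespace
  #   -w read   workload: 100% sequential reads
  #   -o 12288  I/O size in bytes (12 KiB, i.e. three 4096-byte blocks per I/O)
  #   -t 1      run time in seconds
  #   -LL       latency tracking: a single -L prints the summary, -LL adds the full histogram
  #   -i 0      shared-memory group ID, matching the identify runs above
  #   -N        skip the shutdown-notification step when detaching controllers
  ./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
As a sanity check on the summary that follows, IOPS times the 12288-byte I/O size reproduces the MiB/s column: 14840.17 * 12288 / 1048576 is approximately 173.91 MiB/s per namespace.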
00:07:34.678 ======================================================== 00:07:34.678 Latency(us) 00:07:34.678 Device Information : IOPS MiB/s Average min max 00:07:34.678 PCIE (0000:00:10.0) NSID 1 from core 0: 14840.17 173.91 8636.47 5634.11 32957.75 00:07:34.678 PCIE (0000:00:11.0) NSID 1 from core 0: 14840.17 173.91 8624.94 5731.43 31451.47 00:07:34.678 PCIE (0000:00:13.0) NSID 1 from core 0: 14840.17 173.91 8611.83 5718.52 30975.19 00:07:34.678 PCIE (0000:00:12.0) NSID 1 from core 0: 14840.17 173.91 8598.48 5716.29 29584.08 00:07:34.678 PCIE (0000:00:12.0) NSID 2 from core 0: 14840.17 173.91 8584.78 5728.28 28172.19 00:07:34.678 PCIE (0000:00:12.0) NSID 3 from core 0: 14840.17 173.91 8571.71 5737.47 26760.66 00:07:34.678 ======================================================== 00:07:34.678 Total : 89041.04 1043.45 8604.70 5634.11 32957.75 00:07:34.678 00:07:34.678 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:34.678 ================================================================================= 00:07:34.678 1.00000% : 5797.415us 00:07:34.678 10.00000% : 6074.683us 00:07:34.678 25.00000% : 6351.951us 00:07:34.678 50.00000% : 6805.662us 00:07:34.678 75.00000% : 9628.751us 00:07:34.678 90.00000% : 14922.043us 00:07:34.678 95.00000% : 16736.886us 00:07:34.678 98.00000% : 18148.431us 00:07:34.678 99.00000% : 19257.502us 00:07:34.678 99.50000% : 25710.277us 00:07:34.678 99.90000% : 32667.175us 00:07:34.678 99.99000% : 33070.474us 00:07:34.678 99.99900% : 33070.474us 00:07:34.678 99.99990% : 33070.474us 00:07:34.678 99.99999% : 33070.474us 00:07:34.678 00:07:34.678 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:34.678 ================================================================================= 00:07:34.678 1.00000% : 5873.034us 00:07:34.678 10.00000% : 6125.095us 00:07:34.678 25.00000% : 6351.951us 00:07:34.678 50.00000% : 6755.249us 00:07:34.678 75.00000% : 9679.163us 00:07:34.678 90.00000% : 15022.868us 00:07:34.678 95.00000% : 16837.711us 00:07:34.678 98.00000% : 18047.606us 00:07:34.678 99.00000% : 19660.800us 00:07:34.678 99.50000% : 24702.031us 00:07:34.678 99.90000% : 31255.631us 00:07:34.678 99.99000% : 31457.280us 00:07:34.678 99.99900% : 31457.280us 00:07:34.678 99.99990% : 31457.280us 00:07:34.678 99.99999% : 31457.280us 00:07:34.678 00:07:34.678 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:34.678 ================================================================================= 00:07:34.678 1.00000% : 5898.240us 00:07:34.678 10.00000% : 6125.095us 00:07:34.678 25.00000% : 6377.157us 00:07:34.678 50.00000% : 6755.249us 00:07:34.678 75.00000% : 9628.751us 00:07:34.678 90.00000% : 14922.043us 00:07:34.678 95.00000% : 16837.711us 00:07:34.678 98.00000% : 18350.080us 00:07:34.678 99.00000% : 20467.397us 00:07:34.678 99.50000% : 24097.083us 00:07:34.678 99.90000% : 30650.683us 00:07:34.678 99.99000% : 31053.982us 00:07:34.678 99.99900% : 31053.982us 00:07:34.678 99.99990% : 31053.982us 00:07:34.678 99.99999% : 31053.982us 00:07:34.678 00:07:34.678 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:34.678 ================================================================================= 00:07:34.678 1.00000% : 5898.240us 00:07:34.678 10.00000% : 6125.095us 00:07:34.678 25.00000% : 6377.157us 00:07:34.678 50.00000% : 6755.249us 00:07:34.678 75.00000% : 9527.926us 00:07:34.678 90.00000% : 14619.569us 00:07:34.678 95.00000% : 17039.360us 00:07:34.678 98.00000% : 18854.203us 00:07:34.678 
99.00000% : 19963.274us 00:07:34.678 99.50000% : 22584.714us 00:07:34.678 99.90000% : 29239.138us 00:07:34.678 99.99000% : 29642.437us 00:07:34.678 99.99900% : 29642.437us 00:07:34.678 99.99990% : 29642.437us 00:07:34.678 99.99999% : 29642.437us 00:07:34.678 00:07:34.678 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:34.678 ================================================================================= 00:07:34.678 1.00000% : 5898.240us 00:07:34.678 10.00000% : 6125.095us 00:07:34.678 25.00000% : 6351.951us 00:07:34.678 50.00000% : 6755.249us 00:07:34.678 75.00000% : 9628.751us 00:07:34.678 90.00000% : 14518.745us 00:07:34.678 95.00000% : 17140.185us 00:07:34.678 98.00000% : 18652.554us 00:07:34.678 99.00000% : 19358.326us 00:07:34.678 99.50000% : 20870.695us 00:07:34.678 99.90000% : 27827.594us 00:07:34.678 99.99000% : 28230.892us 00:07:34.678 99.99900% : 28230.892us 00:07:34.678 99.99990% : 28230.892us 00:07:34.678 99.99999% : 28230.892us 00:07:34.678 00:07:34.678 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:34.678 ================================================================================= 00:07:34.678 1.00000% : 5873.034us 00:07:34.678 10.00000% : 6125.095us 00:07:34.678 25.00000% : 6351.951us 00:07:34.678 50.00000% : 6755.249us 00:07:34.678 75.00000% : 9578.338us 00:07:34.678 90.00000% : 14720.394us 00:07:34.678 95.00000% : 17241.009us 00:07:34.678 98.00000% : 18249.255us 00:07:34.678 99.00000% : 18753.378us 00:07:34.678 99.50000% : 19559.975us 00:07:34.678 99.90000% : 26416.049us 00:07:34.678 99.99000% : 26819.348us 00:07:34.678 99.99900% : 26819.348us 00:07:34.678 99.99990% : 26819.348us 00:07:34.678 99.99999% : 26819.348us 00:07:34.678 00:07:34.678 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:34.678 ============================================================================== 00:07:34.678 Range in us Cumulative IO count 00:07:34.678 5620.972 - 5646.178: 0.0067% ( 1) 00:07:34.678 5646.178 - 5671.385: 0.0335% ( 4) 00:07:34.678 5671.385 - 5696.591: 0.0604% ( 4) 00:07:34.678 5696.591 - 5721.797: 0.1878% ( 19) 00:07:34.678 5721.797 - 5747.003: 0.3353% ( 22) 00:07:34.678 5747.003 - 5772.209: 0.6438% ( 46) 00:07:34.678 5772.209 - 5797.415: 1.0059% ( 54) 00:07:34.679 5797.415 - 5822.622: 1.4887% ( 72) 00:07:34.679 5822.622 - 5847.828: 2.0453% ( 83) 00:07:34.679 5847.828 - 5873.034: 2.7293% ( 102) 00:07:34.679 5873.034 - 5898.240: 3.4670% ( 110) 00:07:34.679 5898.240 - 5923.446: 4.3053% ( 125) 00:07:34.679 5923.446 - 5948.652: 5.2776% ( 145) 00:07:34.679 5948.652 - 5973.858: 6.1695% ( 133) 00:07:34.679 5973.858 - 5999.065: 7.1553% ( 147) 00:07:34.679 5999.065 - 6024.271: 8.2283% ( 160) 00:07:34.679 6024.271 - 6049.477: 9.3415% ( 166) 00:07:34.679 6049.477 - 6074.683: 10.5687% ( 183) 00:07:34.679 6074.683 - 6099.889: 11.8361% ( 189) 00:07:34.679 6099.889 - 6125.095: 13.2444% ( 210) 00:07:34.679 6125.095 - 6150.302: 14.4917% ( 186) 00:07:34.679 6150.302 - 6175.508: 15.8597% ( 204) 00:07:34.679 6175.508 - 6200.714: 17.1607% ( 194) 00:07:34.679 6200.714 - 6225.920: 18.6293% ( 219) 00:07:34.679 6225.920 - 6251.126: 19.9772% ( 201) 00:07:34.679 6251.126 - 6276.332: 21.4257% ( 216) 00:07:34.679 6276.332 - 6301.538: 22.7870% ( 203) 00:07:34.679 6301.538 - 6326.745: 24.2825% ( 223) 00:07:34.679 6326.745 - 6351.951: 25.7041% ( 212) 00:07:34.679 6351.951 - 6377.157: 27.1258% ( 212) 00:07:34.679 6377.157 - 6402.363: 28.5743% ( 216) 00:07:34.679 6402.363 - 6427.569: 30.1569% ( 236) 00:07:34.679 6427.569 - 6452.775: 31.5652% 
( 210) 00:07:34.679 6452.775 - 6503.188: 34.6164% ( 455) 00:07:34.679 6503.188 - 6553.600: 37.6408% ( 451) 00:07:34.679 6553.600 - 6604.012: 40.6786% ( 453) 00:07:34.679 6604.012 - 6654.425: 43.7031% ( 451) 00:07:34.679 6654.425 - 6704.837: 46.7275% ( 451) 00:07:34.679 6704.837 - 6755.249: 49.5373% ( 419) 00:07:34.679 6755.249 - 6805.662: 52.1593% ( 391) 00:07:34.679 6805.662 - 6856.074: 54.2918% ( 318) 00:07:34.679 6856.074 - 6906.486: 56.0958% ( 269) 00:07:34.679 6906.486 - 6956.898: 57.7119% ( 241) 00:07:34.679 6956.898 - 7007.311: 59.1135% ( 209) 00:07:34.679 7007.311 - 7057.723: 60.2133% ( 164) 00:07:34.679 7057.723 - 7108.135: 61.0582% ( 126) 00:07:34.679 7108.135 - 7158.548: 61.7154% ( 98) 00:07:34.679 7158.548 - 7208.960: 62.3458% ( 94) 00:07:34.679 7208.960 - 7259.372: 62.7884% ( 66) 00:07:34.679 7259.372 - 7309.785: 63.1706% ( 57) 00:07:34.679 7309.785 - 7360.197: 63.5461% ( 56) 00:07:34.679 7360.197 - 7410.609: 63.9418% ( 59) 00:07:34.679 7410.609 - 7461.022: 64.2503% ( 46) 00:07:34.679 7461.022 - 7511.434: 64.5252% ( 41) 00:07:34.679 7511.434 - 7561.846: 64.7197% ( 29) 00:07:34.679 7561.846 - 7612.258: 64.9946% ( 41) 00:07:34.679 7612.258 - 7662.671: 65.2897% ( 44) 00:07:34.679 7662.671 - 7713.083: 65.5378% ( 37) 00:07:34.679 7713.083 - 7763.495: 65.7792% ( 36) 00:07:34.679 7763.495 - 7813.908: 65.9871% ( 31) 00:07:34.679 7813.908 - 7864.320: 66.1816% ( 29) 00:07:34.679 7864.320 - 7914.732: 66.3560% ( 26) 00:07:34.679 7914.732 - 7965.145: 66.5303% ( 26) 00:07:34.679 7965.145 - 8015.557: 66.7248% ( 29) 00:07:34.679 8015.557 - 8065.969: 66.8857% ( 24) 00:07:34.679 8065.969 - 8116.382: 67.1271% ( 36) 00:07:34.679 8116.382 - 8166.794: 67.3619% ( 35) 00:07:34.679 8166.794 - 8217.206: 67.5899% ( 34) 00:07:34.679 8217.206 - 8267.618: 67.8916% ( 45) 00:07:34.679 8267.618 - 8318.031: 68.2135% ( 48) 00:07:34.679 8318.031 - 8368.443: 68.5153% ( 45) 00:07:34.679 8368.443 - 8418.855: 68.7299% ( 32) 00:07:34.679 8418.855 - 8469.268: 69.0048% ( 41) 00:07:34.679 8469.268 - 8519.680: 69.3066% ( 45) 00:07:34.679 8519.680 - 8570.092: 69.6084% ( 45) 00:07:34.679 8570.092 - 8620.505: 69.8900% ( 42) 00:07:34.679 8620.505 - 8670.917: 70.1314% ( 36) 00:07:34.679 8670.917 - 8721.329: 70.4198% ( 43) 00:07:34.679 8721.329 - 8771.742: 70.6813% ( 39) 00:07:34.679 8771.742 - 8822.154: 70.9831% ( 45) 00:07:34.679 8822.154 - 8872.566: 71.2446% ( 39) 00:07:34.679 8872.566 - 8922.978: 71.5732% ( 49) 00:07:34.679 8922.978 - 8973.391: 71.8482% ( 41) 00:07:34.679 8973.391 - 9023.803: 72.1499% ( 45) 00:07:34.679 9023.803 - 9074.215: 72.4651% ( 47) 00:07:34.679 9074.215 - 9124.628: 72.7401% ( 41) 00:07:34.679 9124.628 - 9175.040: 73.0217% ( 42) 00:07:34.679 9175.040 - 9225.452: 73.2900% ( 40) 00:07:34.679 9225.452 - 9275.865: 73.5515% ( 39) 00:07:34.679 9275.865 - 9326.277: 73.8063% ( 38) 00:07:34.679 9326.277 - 9376.689: 74.0545% ( 37) 00:07:34.679 9376.689 - 9427.102: 74.2623% ( 31) 00:07:34.679 9427.102 - 9477.514: 74.4635% ( 30) 00:07:34.679 9477.514 - 9527.926: 74.7116% ( 37) 00:07:34.679 9527.926 - 9578.338: 74.9128% ( 30) 00:07:34.679 9578.338 - 9628.751: 75.1542% ( 36) 00:07:34.679 9628.751 - 9679.163: 75.3286% ( 26) 00:07:34.679 9679.163 - 9729.575: 75.5164% ( 28) 00:07:34.679 9729.575 - 9779.988: 75.6907% ( 26) 00:07:34.679 9779.988 - 9830.400: 75.8785% ( 28) 00:07:34.679 9830.400 - 9880.812: 76.0528% ( 26) 00:07:34.679 9880.812 - 9931.225: 76.2406% ( 28) 00:07:34.679 9931.225 - 9981.637: 76.4686% ( 34) 00:07:34.679 9981.637 - 10032.049: 76.6229% ( 23) 00:07:34.679 10032.049 - 10082.462: 76.8240% ( 30) 
00:07:34.679 10082.462 - 10132.874: 76.9850% ( 24) 00:07:34.679 10132.874 - 10183.286: 77.1392% ( 23) 00:07:34.679 10183.286 - 10233.698: 77.2935% ( 23) 00:07:34.679 10233.698 - 10284.111: 77.4611% ( 25) 00:07:34.679 10284.111 - 10334.523: 77.6153% ( 23) 00:07:34.679 10334.523 - 10384.935: 77.7964% ( 27) 00:07:34.679 10384.935 - 10435.348: 77.9238% ( 19) 00:07:34.679 10435.348 - 10485.760: 78.0714% ( 22) 00:07:34.679 10485.760 - 10536.172: 78.2725% ( 30) 00:07:34.679 10536.172 - 10586.585: 78.4737% ( 30) 00:07:34.679 10586.585 - 10636.997: 78.6548% ( 27) 00:07:34.679 10636.997 - 10687.409: 78.8694% ( 32) 00:07:34.679 10687.409 - 10737.822: 79.0169% ( 22) 00:07:34.679 10737.822 - 10788.234: 79.1644% ( 22) 00:07:34.679 10788.234 - 10838.646: 79.3187% ( 23) 00:07:34.679 10838.646 - 10889.058: 79.5131% ( 29) 00:07:34.679 10889.058 - 10939.471: 79.7076% ( 29) 00:07:34.679 10939.471 - 10989.883: 79.8820% ( 26) 00:07:34.679 10989.883 - 11040.295: 80.0764% ( 29) 00:07:34.679 11040.295 - 11090.708: 80.2910% ( 32) 00:07:34.679 11090.708 - 11141.120: 80.5190% ( 34) 00:07:34.679 11141.120 - 11191.532: 80.6800% ( 24) 00:07:34.679 11191.532 - 11241.945: 80.8409% ( 24) 00:07:34.679 11241.945 - 11292.357: 80.9885% ( 22) 00:07:34.679 11292.357 - 11342.769: 81.1293% ( 21) 00:07:34.679 11342.769 - 11393.182: 81.3171% ( 28) 00:07:34.679 11393.182 - 11443.594: 81.5048% ( 28) 00:07:34.679 11443.594 - 11494.006: 81.6926% ( 28) 00:07:34.679 11494.006 - 11544.418: 81.9474% ( 38) 00:07:34.679 11544.418 - 11594.831: 82.1352% ( 28) 00:07:34.679 11594.831 - 11645.243: 82.2693% ( 20) 00:07:34.679 11645.243 - 11695.655: 82.4370% ( 25) 00:07:34.679 11695.655 - 11746.068: 82.6583% ( 33) 00:07:34.679 11746.068 - 11796.480: 82.8125% ( 23) 00:07:34.679 11796.480 - 11846.892: 82.9600% ( 22) 00:07:34.679 11846.892 - 11897.305: 83.1344% ( 26) 00:07:34.679 11897.305 - 11947.717: 83.2886% ( 23) 00:07:34.680 11947.717 - 11998.129: 83.4160% ( 19) 00:07:34.680 11998.129 - 12048.542: 83.6105% ( 29) 00:07:34.680 12048.542 - 12098.954: 83.7312% ( 18) 00:07:34.680 12098.954 - 12149.366: 83.8586% ( 19) 00:07:34.680 12149.366 - 12199.778: 83.9391% ( 12) 00:07:34.680 12199.778 - 12250.191: 84.0799% ( 21) 00:07:34.680 12250.191 - 12300.603: 84.1537% ( 11) 00:07:34.680 12300.603 - 12351.015: 84.2945% ( 21) 00:07:34.680 12351.015 - 12401.428: 84.3884% ( 14) 00:07:34.680 12401.428 - 12451.840: 84.4957% ( 16) 00:07:34.680 12451.840 - 12502.252: 84.6231% ( 19) 00:07:34.680 12502.252 - 12552.665: 84.7103% ( 13) 00:07:34.680 12552.665 - 12603.077: 84.7774% ( 10) 00:07:34.680 12603.077 - 12653.489: 84.8780% ( 15) 00:07:34.680 12653.489 - 12703.902: 84.9920% ( 17) 00:07:34.680 12703.902 - 12754.314: 85.1060% ( 17) 00:07:34.680 12754.314 - 12804.726: 85.2535% ( 22) 00:07:34.680 12804.726 - 12855.138: 85.3474% ( 14) 00:07:34.680 12855.138 - 12905.551: 85.4681% ( 18) 00:07:34.680 12905.551 - 13006.375: 85.6827% ( 32) 00:07:34.680 13006.375 - 13107.200: 85.8637% ( 27) 00:07:34.680 13107.200 - 13208.025: 86.0649% ( 30) 00:07:34.680 13208.025 - 13308.849: 86.3063% ( 36) 00:07:34.680 13308.849 - 13409.674: 86.5746% ( 40) 00:07:34.680 13409.674 - 13510.498: 86.8629% ( 43) 00:07:34.680 13510.498 - 13611.323: 87.0909% ( 34) 00:07:34.680 13611.323 - 13712.148: 87.3994% ( 46) 00:07:34.680 13712.148 - 13812.972: 87.6677% ( 40) 00:07:34.680 13812.972 - 13913.797: 87.8688% ( 30) 00:07:34.680 13913.797 - 14014.622: 88.0767% ( 31) 00:07:34.680 14014.622 - 14115.446: 88.2846% ( 31) 00:07:34.680 14115.446 - 14216.271: 88.5327% ( 37) 00:07:34.680 14216.271 - 14317.095: 
88.7339% ( 30) 00:07:34.680 14317.095 - 14417.920: 88.9351% ( 30) 00:07:34.680 14417.920 - 14518.745: 89.1765% ( 36) 00:07:34.680 14518.745 - 14619.569: 89.4447% ( 40) 00:07:34.680 14619.569 - 14720.394: 89.6795% ( 35) 00:07:34.680 14720.394 - 14821.218: 89.8739% ( 29) 00:07:34.680 14821.218 - 14922.043: 90.1891% ( 47) 00:07:34.680 14922.043 - 15022.868: 90.4641% ( 41) 00:07:34.680 15022.868 - 15123.692: 90.7390% ( 41) 00:07:34.680 15123.692 - 15224.517: 91.0072% ( 40) 00:07:34.680 15224.517 - 15325.342: 91.2487% ( 36) 00:07:34.680 15325.342 - 15426.166: 91.5035% ( 38) 00:07:34.680 15426.166 - 15526.991: 91.7315% ( 34) 00:07:34.680 15526.991 - 15627.815: 91.9528% ( 33) 00:07:34.680 15627.815 - 15728.640: 92.1137% ( 24) 00:07:34.680 15728.640 - 15829.465: 92.3954% ( 42) 00:07:34.680 15829.465 - 15930.289: 92.6905% ( 44) 00:07:34.680 15930.289 - 16031.114: 92.9922% ( 45) 00:07:34.680 16031.114 - 16131.938: 93.2605% ( 40) 00:07:34.680 16131.938 - 16232.763: 93.5287% ( 40) 00:07:34.680 16232.763 - 16333.588: 93.7567% ( 34) 00:07:34.680 16333.588 - 16434.412: 94.0719% ( 47) 00:07:34.680 16434.412 - 16535.237: 94.3737% ( 45) 00:07:34.680 16535.237 - 16636.062: 94.7425% ( 55) 00:07:34.680 16636.062 - 16736.886: 95.0644% ( 48) 00:07:34.680 16736.886 - 16837.711: 95.3594% ( 44) 00:07:34.680 16837.711 - 16938.535: 95.6143% ( 38) 00:07:34.680 16938.535 - 17039.360: 95.9496% ( 50) 00:07:34.680 17039.360 - 17140.185: 96.2111% ( 39) 00:07:34.680 17140.185 - 17241.009: 96.4458% ( 35) 00:07:34.680 17241.009 - 17341.834: 96.7208% ( 41) 00:07:34.680 17341.834 - 17442.658: 96.9555% ( 35) 00:07:34.680 17442.658 - 17543.483: 97.2103% ( 38) 00:07:34.680 17543.483 - 17644.308: 97.4316% ( 33) 00:07:34.680 17644.308 - 17745.132: 97.6060% ( 26) 00:07:34.680 17745.132 - 17845.957: 97.7133% ( 16) 00:07:34.680 17845.957 - 17946.782: 97.8205% ( 16) 00:07:34.680 17946.782 - 18047.606: 97.9278% ( 16) 00:07:34.680 18047.606 - 18148.431: 98.0486% ( 18) 00:07:34.680 18148.431 - 18249.255: 98.1156% ( 10) 00:07:34.680 18249.255 - 18350.080: 98.2296% ( 17) 00:07:34.680 18350.080 - 18450.905: 98.3369% ( 16) 00:07:34.680 18450.905 - 18551.729: 98.4308% ( 14) 00:07:34.680 18551.729 - 18652.554: 98.5180% ( 13) 00:07:34.680 18652.554 - 18753.378: 98.6186% ( 15) 00:07:34.680 18753.378 - 18854.203: 98.7124% ( 14) 00:07:34.680 18854.203 - 18955.028: 98.7929% ( 12) 00:07:34.680 18955.028 - 19055.852: 98.8667% ( 11) 00:07:34.680 19055.852 - 19156.677: 98.9606% ( 14) 00:07:34.680 19156.677 - 19257.502: 99.0008% ( 6) 00:07:34.680 19257.502 - 19358.326: 99.0477% ( 7) 00:07:34.680 19358.326 - 19459.151: 99.1148% ( 10) 00:07:34.680 19459.151 - 19559.975: 99.1282% ( 2) 00:07:34.680 19559.975 - 19660.800: 99.1416% ( 2) 00:07:34.680 23996.258 - 24097.083: 99.1483% ( 1) 00:07:34.680 24097.083 - 24197.908: 99.1752% ( 4) 00:07:34.680 24197.908 - 24298.732: 99.1886% ( 2) 00:07:34.680 24298.732 - 24399.557: 99.2087% ( 3) 00:07:34.680 24399.557 - 24500.382: 99.2355% ( 4) 00:07:34.680 24500.382 - 24601.206: 99.2489% ( 2) 00:07:34.680 24601.206 - 24702.031: 99.2556% ( 1) 00:07:34.680 24702.031 - 24802.855: 99.2892% ( 5) 00:07:34.680 24802.855 - 24903.680: 99.3160% ( 4) 00:07:34.680 24903.680 - 25004.505: 99.3361% ( 3) 00:07:34.680 25004.505 - 25105.329: 99.3562% ( 3) 00:07:34.680 25105.329 - 25206.154: 99.3830% ( 4) 00:07:34.680 25206.154 - 25306.978: 99.4032% ( 3) 00:07:34.680 25306.978 - 25407.803: 99.4300% ( 4) 00:07:34.680 25407.803 - 25508.628: 99.4568% ( 4) 00:07:34.680 25508.628 - 25609.452: 99.4836% ( 4) 00:07:34.680 25609.452 - 25710.277: 
99.5038% ( 3) 00:07:34.680 25710.277 - 25811.102: 99.5306% ( 4) 00:07:34.680 25811.102 - 26012.751: 99.5641% ( 5) 00:07:34.680 26012.751 - 26214.400: 99.5708% ( 1) 00:07:34.680 31053.982 - 31255.631: 99.5976% ( 4) 00:07:34.680 31255.631 - 31457.280: 99.6580% ( 9) 00:07:34.680 31457.280 - 31658.929: 99.6982% ( 6) 00:07:34.680 31658.929 - 31860.578: 99.7519% ( 8) 00:07:34.680 31860.578 - 32062.228: 99.7787% ( 4) 00:07:34.680 32062.228 - 32263.877: 99.8323% ( 8) 00:07:34.680 32263.877 - 32465.526: 99.8793% ( 7) 00:07:34.680 32465.526 - 32667.175: 99.9262% ( 7) 00:07:34.680 32667.175 - 32868.825: 99.9799% ( 8) 00:07:34.680 32868.825 - 33070.474: 100.0000% ( 3) 00:07:34.680 00:07:34.680 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:34.680 ============================================================================== 00:07:34.680 Range in us Cumulative IO count 00:07:34.680 5721.797 - 5747.003: 0.0201% ( 3) 00:07:34.680 5747.003 - 5772.209: 0.0671% ( 7) 00:07:34.680 5772.209 - 5797.415: 0.1408% ( 11) 00:07:34.680 5797.415 - 5822.622: 0.3286% ( 28) 00:07:34.680 5822.622 - 5847.828: 0.7511% ( 63) 00:07:34.680 5847.828 - 5873.034: 1.0663% ( 47) 00:07:34.680 5873.034 - 5898.240: 1.5558% ( 73) 00:07:34.680 5898.240 - 5923.446: 2.1727% ( 92) 00:07:34.680 5923.446 - 5948.652: 2.8903% ( 107) 00:07:34.680 5948.652 - 5973.858: 3.7688% ( 131) 00:07:34.680 5973.858 - 5999.065: 4.6808% ( 136) 00:07:34.680 5999.065 - 6024.271: 5.6599% ( 146) 00:07:34.680 6024.271 - 6049.477: 6.8670% ( 180) 00:07:34.680 6049.477 - 6074.683: 8.0606% ( 178) 00:07:34.680 6074.683 - 6099.889: 9.3012% ( 185) 00:07:34.680 6099.889 - 6125.095: 10.7296% ( 213) 00:07:34.680 6125.095 - 6150.302: 12.2385% ( 225) 00:07:34.680 6150.302 - 6175.508: 13.6870% ( 216) 00:07:34.680 6175.508 - 6200.714: 15.2092% ( 227) 00:07:34.680 6200.714 - 6225.920: 16.8254% ( 241) 00:07:34.680 6225.920 - 6251.126: 18.3745% ( 231) 00:07:34.680 6251.126 - 6276.332: 19.9973% ( 242) 00:07:34.680 6276.332 - 6301.538: 21.5933% ( 238) 00:07:34.680 6301.538 - 6326.745: 23.4040% ( 270) 00:07:34.680 6326.745 - 6351.951: 25.1207% ( 256) 00:07:34.680 6351.951 - 6377.157: 26.8710% ( 261) 00:07:34.680 6377.157 - 6402.363: 28.5609% ( 252) 00:07:34.680 6402.363 - 6427.569: 30.2776% ( 256) 00:07:34.680 6427.569 - 6452.775: 31.9005% ( 242) 00:07:34.680 6452.775 - 6503.188: 35.3943% ( 521) 00:07:34.680 6503.188 - 6553.600: 38.8747% ( 519) 00:07:34.680 6553.600 - 6604.012: 42.3954% ( 525) 00:07:34.680 6604.012 - 6654.425: 45.7752% ( 504) 00:07:34.680 6654.425 - 6704.837: 48.7124% ( 438) 00:07:34.680 6704.837 - 6755.249: 51.4150% ( 403) 00:07:34.680 6755.249 - 6805.662: 53.6950% ( 340) 00:07:34.680 6805.662 - 6856.074: 55.7269% ( 303) 00:07:34.680 6856.074 - 6906.486: 57.4705% ( 260) 00:07:34.680 6906.486 - 6956.898: 58.8452% ( 205) 00:07:34.680 6956.898 - 7007.311: 59.9316% ( 162) 00:07:34.680 7007.311 - 7057.723: 60.8101% ( 131) 00:07:34.680 7057.723 - 7108.135: 61.5075% ( 104) 00:07:34.680 7108.135 - 7158.548: 62.1178% ( 91) 00:07:34.680 7158.548 - 7208.960: 62.5939% ( 71) 00:07:34.680 7208.960 - 7259.372: 63.0164% ( 63) 00:07:34.680 7259.372 - 7309.785: 63.4053% ( 58) 00:07:34.680 7309.785 - 7360.197: 63.7741% ( 55) 00:07:34.680 7360.197 - 7410.609: 64.1027% ( 49) 00:07:34.680 7410.609 - 7461.022: 64.4246% ( 48) 00:07:34.680 7461.022 - 7511.434: 64.7264% ( 45) 00:07:34.680 7511.434 - 7561.846: 65.0282% ( 45) 00:07:34.680 7561.846 - 7612.258: 65.3299% ( 45) 00:07:34.680 7612.258 - 7662.671: 65.5714% ( 36) 00:07:34.680 7662.671 - 7713.083: 65.8463% ( 41) 
00:07:34.680 7713.083 - 7763.495: 66.0877% ( 36) 00:07:34.680 7763.495 - 7813.908: 66.2621% ( 26) 00:07:34.680 7813.908 - 7864.320: 66.4431% ( 27) 00:07:34.680 7864.320 - 7914.732: 66.6913% ( 37) 00:07:34.680 7914.732 - 7965.145: 66.9193% ( 34) 00:07:34.680 7965.145 - 8015.557: 67.1741% ( 38) 00:07:34.680 8015.557 - 8065.969: 67.4155% ( 36) 00:07:34.680 8065.969 - 8116.382: 67.6234% ( 31) 00:07:34.680 8116.382 - 8166.794: 67.8179% ( 29) 00:07:34.680 8166.794 - 8217.206: 68.0056% ( 28) 00:07:34.680 8217.206 - 8267.618: 68.2001% ( 29) 00:07:34.680 8267.618 - 8318.031: 68.4147% ( 32) 00:07:34.681 8318.031 - 8368.443: 68.6561% ( 36) 00:07:34.681 8368.443 - 8418.855: 68.9042% ( 37) 00:07:34.681 8418.855 - 8469.268: 69.1255% ( 33) 00:07:34.681 8469.268 - 8519.680: 69.4005% ( 41) 00:07:34.681 8519.680 - 8570.092: 69.7224% ( 48) 00:07:34.681 8570.092 - 8620.505: 69.9906% ( 40) 00:07:34.681 8620.505 - 8670.917: 70.2387% ( 37) 00:07:34.681 8670.917 - 8721.329: 70.4802% ( 36) 00:07:34.681 8721.329 - 8771.742: 70.7350% ( 38) 00:07:34.681 8771.742 - 8822.154: 70.9898% ( 38) 00:07:34.681 8822.154 - 8872.566: 71.2245% ( 35) 00:07:34.681 8872.566 - 8922.978: 71.4793% ( 38) 00:07:34.681 8922.978 - 8973.391: 71.7208% ( 36) 00:07:34.681 8973.391 - 9023.803: 71.9957% ( 41) 00:07:34.681 9023.803 - 9074.215: 72.2103% ( 32) 00:07:34.681 9074.215 - 9124.628: 72.3914% ( 27) 00:07:34.681 9124.628 - 9175.040: 72.6462% ( 38) 00:07:34.681 9175.040 - 9225.452: 72.8608% ( 32) 00:07:34.681 9225.452 - 9275.865: 73.1156% ( 38) 00:07:34.681 9275.865 - 9326.277: 73.3637% ( 37) 00:07:34.681 9326.277 - 9376.689: 73.5984% ( 35) 00:07:34.681 9376.689 - 9427.102: 73.8130% ( 32) 00:07:34.681 9427.102 - 9477.514: 74.0679% ( 38) 00:07:34.681 9477.514 - 9527.926: 74.3495% ( 42) 00:07:34.681 9527.926 - 9578.338: 74.6043% ( 38) 00:07:34.681 9578.338 - 9628.751: 74.8659% ( 39) 00:07:34.681 9628.751 - 9679.163: 75.1006% ( 35) 00:07:34.681 9679.163 - 9729.575: 75.3755% ( 41) 00:07:34.681 9729.575 - 9779.988: 75.5834% ( 31) 00:07:34.681 9779.988 - 9830.400: 75.8718% ( 43) 00:07:34.681 9830.400 - 9880.812: 76.1400% ( 40) 00:07:34.681 9880.812 - 9931.225: 76.4217% ( 42) 00:07:34.681 9931.225 - 9981.637: 76.6832% ( 39) 00:07:34.681 9981.637 - 10032.049: 76.9649% ( 42) 00:07:34.681 10032.049 - 10082.462: 77.2264% ( 39) 00:07:34.681 10082.462 - 10132.874: 77.4611% ( 35) 00:07:34.681 10132.874 - 10183.286: 77.6757% ( 32) 00:07:34.681 10183.286 - 10233.698: 77.9238% ( 37) 00:07:34.681 10233.698 - 10284.111: 78.1451% ( 33) 00:07:34.681 10284.111 - 10334.523: 78.3463% ( 30) 00:07:34.681 10334.523 - 10384.935: 78.5408% ( 29) 00:07:34.681 10384.935 - 10435.348: 78.7084% ( 25) 00:07:34.681 10435.348 - 10485.760: 78.8962% ( 28) 00:07:34.681 10485.760 - 10536.172: 79.0840% ( 28) 00:07:34.681 10536.172 - 10586.585: 79.2315% ( 22) 00:07:34.681 10586.585 - 10636.997: 79.4058% ( 26) 00:07:34.681 10636.997 - 10687.409: 79.5534% ( 22) 00:07:34.681 10687.409 - 10737.822: 79.6741% ( 18) 00:07:34.681 10737.822 - 10788.234: 79.7613% ( 13) 00:07:34.681 10788.234 - 10838.646: 79.8484% ( 13) 00:07:34.681 10838.646 - 10889.058: 79.9423% ( 14) 00:07:34.681 10889.058 - 10939.471: 80.0764% ( 20) 00:07:34.681 10939.471 - 10989.883: 80.2709% ( 29) 00:07:34.681 10989.883 - 11040.295: 80.4185% ( 22) 00:07:34.681 11040.295 - 11090.708: 80.6129% ( 29) 00:07:34.681 11090.708 - 11141.120: 80.7538% ( 21) 00:07:34.681 11141.120 - 11191.532: 80.9415% ( 28) 00:07:34.681 11191.532 - 11241.945: 81.1226% ( 27) 00:07:34.681 11241.945 - 11292.357: 81.3305% ( 31) 00:07:34.681 11292.357 - 
11342.769: 81.4981% ( 25) 00:07:34.681 11342.769 - 11393.182: 81.6658% ( 25) 00:07:34.681 11393.182 - 11443.594: 81.8401% ( 26) 00:07:34.681 11443.594 - 11494.006: 82.0011% ( 24) 00:07:34.681 11494.006 - 11544.418: 82.1687% ( 25) 00:07:34.681 11544.418 - 11594.831: 82.3230% ( 23) 00:07:34.681 11594.831 - 11645.243: 82.4705% ( 22) 00:07:34.681 11645.243 - 11695.655: 82.6381% ( 25) 00:07:34.681 11695.655 - 11746.068: 82.8058% ( 25) 00:07:34.681 11746.068 - 11796.480: 82.9734% ( 25) 00:07:34.681 11796.480 - 11846.892: 83.1344% ( 24) 00:07:34.681 11846.892 - 11897.305: 83.3087% ( 26) 00:07:34.681 11897.305 - 11947.717: 83.4831% ( 26) 00:07:34.681 11947.717 - 11998.129: 83.6642% ( 27) 00:07:34.681 11998.129 - 12048.542: 83.8519% ( 28) 00:07:34.681 12048.542 - 12098.954: 83.9995% ( 22) 00:07:34.681 12098.954 - 12149.366: 84.1269% ( 19) 00:07:34.681 12149.366 - 12199.778: 84.2476% ( 18) 00:07:34.681 12199.778 - 12250.191: 84.3616% ( 17) 00:07:34.681 12250.191 - 12300.603: 84.4689% ( 16) 00:07:34.681 12300.603 - 12351.015: 84.6030% ( 20) 00:07:34.681 12351.015 - 12401.428: 84.7103% ( 16) 00:07:34.681 12401.428 - 12451.840: 84.8444% ( 20) 00:07:34.681 12451.840 - 12502.252: 84.9450% ( 15) 00:07:34.681 12502.252 - 12552.665: 85.0322% ( 13) 00:07:34.681 12552.665 - 12603.077: 85.1328% ( 15) 00:07:34.681 12603.077 - 12653.489: 85.2267% ( 14) 00:07:34.681 12653.489 - 12703.902: 85.3071% ( 12) 00:07:34.681 12703.902 - 12754.314: 85.3742% ( 10) 00:07:34.681 12754.314 - 12804.726: 85.4211% ( 7) 00:07:34.681 12804.726 - 12855.138: 85.4815% ( 9) 00:07:34.681 12855.138 - 12905.551: 85.5150% ( 5) 00:07:34.681 12905.551 - 13006.375: 85.6089% ( 14) 00:07:34.681 13006.375 - 13107.200: 85.7497% ( 21) 00:07:34.681 13107.200 - 13208.025: 85.9040% ( 23) 00:07:34.681 13208.025 - 13308.849: 86.0515% ( 22) 00:07:34.681 13308.849 - 13409.674: 86.1923% ( 21) 00:07:34.681 13409.674 - 13510.498: 86.3197% ( 19) 00:07:34.681 13510.498 - 13611.323: 86.4807% ( 24) 00:07:34.681 13611.323 - 13712.148: 86.6685% ( 28) 00:07:34.681 13712.148 - 13812.972: 86.8898% ( 33) 00:07:34.681 13812.972 - 13913.797: 87.2116% ( 48) 00:07:34.681 13913.797 - 14014.622: 87.4799% ( 40) 00:07:34.681 14014.622 - 14115.446: 87.7414% ( 39) 00:07:34.681 14115.446 - 14216.271: 87.9426% ( 30) 00:07:34.681 14216.271 - 14317.095: 88.1773% ( 35) 00:07:34.681 14317.095 - 14417.920: 88.4523% ( 41) 00:07:34.681 14417.920 - 14518.745: 88.7272% ( 41) 00:07:34.681 14518.745 - 14619.569: 88.9954% ( 40) 00:07:34.681 14619.569 - 14720.394: 89.2905% ( 44) 00:07:34.681 14720.394 - 14821.218: 89.6124% ( 48) 00:07:34.681 14821.218 - 14922.043: 89.9343% ( 48) 00:07:34.681 14922.043 - 15022.868: 90.2696% ( 50) 00:07:34.681 15022.868 - 15123.692: 90.6049% ( 50) 00:07:34.681 15123.692 - 15224.517: 90.9804% ( 56) 00:07:34.681 15224.517 - 15325.342: 91.3358% ( 53) 00:07:34.681 15325.342 - 15426.166: 91.7114% ( 56) 00:07:34.681 15426.166 - 15526.991: 92.0735% ( 54) 00:07:34.681 15526.991 - 15627.815: 92.4155% ( 51) 00:07:34.681 15627.815 - 15728.640: 92.6636% ( 37) 00:07:34.681 15728.640 - 15829.465: 92.8916% ( 34) 00:07:34.681 15829.465 - 15930.289: 93.0995% ( 31) 00:07:34.681 15930.289 - 16031.114: 93.3342% ( 35) 00:07:34.681 16031.114 - 16131.938: 93.5555% ( 33) 00:07:34.681 16131.938 - 16232.763: 93.7567% ( 30) 00:07:34.681 16232.763 - 16333.588: 93.9378% ( 27) 00:07:34.681 16333.588 - 16434.412: 94.1389% ( 30) 00:07:34.681 16434.412 - 16535.237: 94.3066% ( 25) 00:07:34.681 16535.237 - 16636.062: 94.5883% ( 42) 00:07:34.681 16636.062 - 16736.886: 94.8364% ( 37) 00:07:34.681 
16736.886 - 16837.711: 95.1046% ( 40) 00:07:34.681 16837.711 - 16938.535: 95.4332% ( 49) 00:07:34.681 16938.535 - 17039.360: 95.8020% ( 55) 00:07:34.681 17039.360 - 17140.185: 96.1105% ( 46) 00:07:34.681 17140.185 - 17241.009: 96.3788% ( 40) 00:07:34.681 17241.009 - 17341.834: 96.6269% ( 37) 00:07:34.681 17341.834 - 17442.658: 96.8750% ( 37) 00:07:34.681 17442.658 - 17543.483: 97.1164% ( 36) 00:07:34.681 17543.483 - 17644.308: 97.3578% ( 36) 00:07:34.681 17644.308 - 17745.132: 97.5724% ( 32) 00:07:34.681 17745.132 - 17845.957: 97.7736% ( 30) 00:07:34.681 17845.957 - 17946.782: 97.9144% ( 21) 00:07:34.681 17946.782 - 18047.606: 98.0217% ( 16) 00:07:34.681 18047.606 - 18148.431: 98.1223% ( 15) 00:07:34.681 18148.431 - 18249.255: 98.1827% ( 9) 00:07:34.681 18249.255 - 18350.080: 98.2363% ( 8) 00:07:34.681 18350.080 - 18450.905: 98.2900% ( 8) 00:07:34.681 18450.905 - 18551.729: 98.3369% ( 7) 00:07:34.681 18551.729 - 18652.554: 98.3906% ( 8) 00:07:34.681 18652.554 - 18753.378: 98.4442% ( 8) 00:07:34.681 18753.378 - 18854.203: 98.4979% ( 8) 00:07:34.681 18854.203 - 18955.028: 98.5448% ( 7) 00:07:34.681 18955.028 - 19055.852: 98.6052% ( 9) 00:07:34.681 19055.852 - 19156.677: 98.7460% ( 21) 00:07:34.681 19156.677 - 19257.502: 98.8264% ( 12) 00:07:34.681 19257.502 - 19358.326: 98.8935% ( 10) 00:07:34.681 19358.326 - 19459.151: 98.9337% ( 6) 00:07:34.681 19459.151 - 19559.975: 98.9673% ( 5) 00:07:34.681 19559.975 - 19660.800: 99.0008% ( 5) 00:07:34.681 19660.800 - 19761.625: 99.0410% ( 6) 00:07:34.681 19761.625 - 19862.449: 99.0746% ( 5) 00:07:34.681 19862.449 - 19963.274: 99.1081% ( 5) 00:07:34.681 19963.274 - 20064.098: 99.1416% ( 5) 00:07:34.681 23189.662 - 23290.486: 99.1483% ( 1) 00:07:34.681 23290.486 - 23391.311: 99.1752% ( 4) 00:07:34.681 23391.311 - 23492.135: 99.2020% ( 4) 00:07:34.681 23492.135 - 23592.960: 99.2221% ( 3) 00:07:34.681 23592.960 - 23693.785: 99.2489% ( 4) 00:07:34.681 23693.785 - 23794.609: 99.2758% ( 4) 00:07:34.681 23794.609 - 23895.434: 99.2959% ( 3) 00:07:34.681 23895.434 - 23996.258: 99.3227% ( 4) 00:07:34.681 23996.258 - 24097.083: 99.3495% ( 4) 00:07:34.681 24097.083 - 24197.908: 99.3763% ( 4) 00:07:34.681 24197.908 - 24298.732: 99.4032% ( 4) 00:07:34.681 24298.732 - 24399.557: 99.4300% ( 4) 00:07:34.681 24399.557 - 24500.382: 99.4501% ( 3) 00:07:34.681 24500.382 - 24601.206: 99.4769% ( 4) 00:07:34.681 24601.206 - 24702.031: 99.5038% ( 4) 00:07:34.681 24702.031 - 24802.855: 99.5239% ( 3) 00:07:34.681 24802.855 - 24903.680: 99.5507% ( 4) 00:07:34.681 24903.680 - 25004.505: 99.5708% ( 3) 00:07:34.681 29642.437 - 29844.086: 99.6111% ( 6) 00:07:34.681 29844.086 - 30045.735: 99.6513% ( 6) 00:07:34.682 30045.735 - 30247.385: 99.6982% ( 7) 00:07:34.682 30247.385 - 30449.034: 99.7519% ( 8) 00:07:34.682 30449.034 - 30650.683: 99.7988% ( 7) 00:07:34.682 30650.683 - 30852.332: 99.8525% ( 8) 00:07:34.682 30852.332 - 31053.982: 99.8994% ( 7) 00:07:34.682 31053.982 - 31255.631: 99.9464% ( 7) 00:07:34.682 31255.631 - 31457.280: 100.0000% ( 8) 00:07:34.682 00:07:34.682 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:34.682 ============================================================================== 00:07:34.682 Range in us Cumulative IO count 00:07:34.682 5696.591 - 5721.797: 0.0134% ( 2) 00:07:34.682 5721.797 - 5747.003: 0.0335% ( 3) 00:07:34.682 5747.003 - 5772.209: 0.0671% ( 5) 00:07:34.682 5772.209 - 5797.415: 0.1408% ( 11) 00:07:34.682 5797.415 - 5822.622: 0.2884% ( 22) 00:07:34.682 5822.622 - 5847.828: 0.5097% ( 33) 00:07:34.682 5847.828 - 5873.034: 
0.8450% ( 50) 00:07:34.682 5873.034 - 5898.240: 1.4887% ( 96) 00:07:34.682 5898.240 - 5923.446: 2.1660% ( 101) 00:07:34.682 5923.446 - 5948.652: 2.9506% ( 117) 00:07:34.682 5948.652 - 5973.858: 3.8694% ( 137) 00:07:34.682 5973.858 - 5999.065: 4.8954% ( 153) 00:07:34.682 5999.065 - 6024.271: 6.0555% ( 173) 00:07:34.682 6024.271 - 6049.477: 7.0547% ( 149) 00:07:34.682 6049.477 - 6074.683: 8.2551% ( 179) 00:07:34.682 6074.683 - 6099.889: 9.5762% ( 197) 00:07:34.682 6099.889 - 6125.095: 10.9979% ( 212) 00:07:34.682 6125.095 - 6150.302: 12.4799% ( 221) 00:07:34.682 6150.302 - 6175.508: 13.9552% ( 220) 00:07:34.682 6175.508 - 6200.714: 15.4305% ( 220) 00:07:34.682 6200.714 - 6225.920: 16.9796% ( 231) 00:07:34.682 6225.920 - 6251.126: 18.5019% ( 227) 00:07:34.682 6251.126 - 6276.332: 20.0040% ( 224) 00:07:34.682 6276.332 - 6301.538: 21.6269% ( 242) 00:07:34.682 6301.538 - 6326.745: 23.2833% ( 247) 00:07:34.682 6326.745 - 6351.951: 24.9732% ( 252) 00:07:34.682 6351.951 - 6377.157: 26.7637% ( 267) 00:07:34.682 6377.157 - 6402.363: 28.5274% ( 263) 00:07:34.682 6402.363 - 6427.569: 30.3045% ( 265) 00:07:34.682 6427.569 - 6452.775: 32.0681% ( 263) 00:07:34.682 6452.775 - 6503.188: 35.5217% ( 515) 00:07:34.682 6503.188 - 6553.600: 39.0424% ( 525) 00:07:34.682 6553.600 - 6604.012: 42.6435% ( 537) 00:07:34.682 6604.012 - 6654.425: 46.0166% ( 503) 00:07:34.682 6654.425 - 6704.837: 49.0813% ( 457) 00:07:34.682 6704.837 - 6755.249: 51.7905% ( 404) 00:07:34.682 6755.249 - 6805.662: 54.0638% ( 339) 00:07:34.682 6805.662 - 6856.074: 56.0421% ( 295) 00:07:34.682 6856.074 - 6906.486: 57.6046% ( 233) 00:07:34.682 6906.486 - 6956.898: 58.8050% ( 179) 00:07:34.682 6956.898 - 7007.311: 59.8042% ( 149) 00:07:34.682 7007.311 - 7057.723: 60.6156% ( 121) 00:07:34.682 7057.723 - 7108.135: 61.3399% ( 108) 00:07:34.682 7108.135 - 7158.548: 61.8898% ( 82) 00:07:34.682 7158.548 - 7208.960: 62.3458% ( 68) 00:07:34.682 7208.960 - 7259.372: 62.7146% ( 55) 00:07:34.682 7259.372 - 7309.785: 63.0432% ( 49) 00:07:34.682 7309.785 - 7360.197: 63.4254% ( 57) 00:07:34.682 7360.197 - 7410.609: 63.8881% ( 69) 00:07:34.682 7410.609 - 7461.022: 64.3039% ( 62) 00:07:34.682 7461.022 - 7511.434: 64.5990% ( 44) 00:07:34.682 7511.434 - 7561.846: 64.9209% ( 48) 00:07:34.682 7561.846 - 7612.258: 65.2092% ( 43) 00:07:34.682 7612.258 - 7662.671: 65.5110% ( 45) 00:07:34.682 7662.671 - 7713.083: 65.7792% ( 40) 00:07:34.682 7713.083 - 7763.495: 66.0341% ( 38) 00:07:34.682 7763.495 - 7813.908: 66.2889% ( 38) 00:07:34.682 7813.908 - 7864.320: 66.5571% ( 40) 00:07:34.682 7864.320 - 7914.732: 66.7851% ( 34) 00:07:34.682 7914.732 - 7965.145: 67.0333% ( 37) 00:07:34.682 7965.145 - 8015.557: 67.2881% ( 38) 00:07:34.682 8015.557 - 8065.969: 67.5295% ( 36) 00:07:34.682 8065.969 - 8116.382: 67.7709% ( 36) 00:07:34.682 8116.382 - 8166.794: 68.0258% ( 38) 00:07:34.682 8166.794 - 8217.206: 68.2470% ( 33) 00:07:34.682 8217.206 - 8267.618: 68.4616% ( 32) 00:07:34.682 8267.618 - 8318.031: 68.6561% ( 29) 00:07:34.682 8318.031 - 8368.443: 68.8238% ( 25) 00:07:34.682 8368.443 - 8418.855: 69.1121% ( 43) 00:07:34.682 8418.855 - 8469.268: 69.3267% ( 32) 00:07:34.682 8469.268 - 8519.680: 69.5011% ( 26) 00:07:34.682 8519.680 - 8570.092: 69.8095% ( 46) 00:07:34.682 8570.092 - 8620.505: 70.0107% ( 30) 00:07:34.682 8620.505 - 8670.917: 70.2119% ( 30) 00:07:34.682 8670.917 - 8721.329: 70.4802% ( 40) 00:07:34.682 8721.329 - 8771.742: 70.7484% ( 40) 00:07:34.682 8771.742 - 8822.154: 71.0837% ( 50) 00:07:34.682 8822.154 - 8872.566: 71.3855% ( 45) 00:07:34.682 8872.566 - 8922.978: 
71.6336% ( 37) 00:07:34.682 8922.978 - 8973.391: 71.8750% ( 36) 00:07:34.682 8973.391 - 9023.803: 72.1499% ( 41) 00:07:34.682 9023.803 - 9074.215: 72.4785% ( 49) 00:07:34.682 9074.215 - 9124.628: 72.7401% ( 39) 00:07:34.682 9124.628 - 9175.040: 72.9815% ( 36) 00:07:34.682 9175.040 - 9225.452: 73.2564% ( 41) 00:07:34.682 9225.452 - 9275.865: 73.5314% ( 41) 00:07:34.682 9275.865 - 9326.277: 73.8130% ( 42) 00:07:34.682 9326.277 - 9376.689: 74.0746% ( 39) 00:07:34.682 9376.689 - 9427.102: 74.2489% ( 26) 00:07:34.682 9427.102 - 9477.514: 74.4300% ( 27) 00:07:34.682 9477.514 - 9527.926: 74.6312% ( 30) 00:07:34.682 9527.926 - 9578.338: 74.8458% ( 32) 00:07:34.682 9578.338 - 9628.751: 75.0536% ( 31) 00:07:34.682 9628.751 - 9679.163: 75.2884% ( 35) 00:07:34.682 9679.163 - 9729.575: 75.4962% ( 31) 00:07:34.682 9729.575 - 9779.988: 75.7980% ( 45) 00:07:34.682 9779.988 - 9830.400: 76.1199% ( 48) 00:07:34.682 9830.400 - 9880.812: 76.3345% ( 32) 00:07:34.682 9880.812 - 9931.225: 76.5491% ( 32) 00:07:34.682 9931.225 - 9981.637: 76.7302% ( 27) 00:07:34.682 9981.637 - 10032.049: 76.9313% ( 30) 00:07:34.682 10032.049 - 10082.462: 77.1593% ( 34) 00:07:34.682 10082.462 - 10132.874: 77.3538% ( 29) 00:07:34.682 10132.874 - 10183.286: 77.5550% ( 30) 00:07:34.682 10183.286 - 10233.698: 77.8031% ( 37) 00:07:34.682 10233.698 - 10284.111: 78.0982% ( 44) 00:07:34.682 10284.111 - 10334.523: 78.3463% ( 37) 00:07:34.682 10334.523 - 10384.935: 78.5475% ( 30) 00:07:34.682 10384.935 - 10435.348: 78.7352% ( 28) 00:07:34.682 10435.348 - 10485.760: 78.9968% ( 39) 00:07:34.682 10485.760 - 10536.172: 79.1980% ( 30) 00:07:34.682 10536.172 - 10586.585: 79.3723% ( 26) 00:07:34.682 10586.585 - 10636.997: 79.5869% ( 32) 00:07:34.682 10636.997 - 10687.409: 79.8149% ( 34) 00:07:34.682 10687.409 - 10737.822: 80.0161% ( 30) 00:07:34.682 10737.822 - 10788.234: 80.2374% ( 33) 00:07:34.682 10788.234 - 10838.646: 80.4319% ( 29) 00:07:34.682 10838.646 - 10889.058: 80.6330% ( 30) 00:07:34.682 10889.058 - 10939.471: 80.8208% ( 28) 00:07:34.682 10939.471 - 10989.883: 81.0287% ( 31) 00:07:34.682 10989.883 - 11040.295: 81.2031% ( 26) 00:07:34.682 11040.295 - 11090.708: 81.3975% ( 29) 00:07:34.682 11090.708 - 11141.120: 81.5987% ( 30) 00:07:34.682 11141.120 - 11191.532: 81.7865% ( 28) 00:07:34.682 11191.532 - 11241.945: 81.9675% ( 27) 00:07:34.682 11241.945 - 11292.357: 82.0883% ( 18) 00:07:34.682 11292.357 - 11342.769: 82.2291% ( 21) 00:07:34.682 11342.769 - 11393.182: 82.3632% ( 20) 00:07:34.682 11393.182 - 11443.594: 82.4973% ( 20) 00:07:34.682 11443.594 - 11494.006: 82.6381% ( 21) 00:07:34.682 11494.006 - 11544.418: 82.7857% ( 22) 00:07:34.682 11544.418 - 11594.831: 82.9198% ( 20) 00:07:34.682 11594.831 - 11645.243: 83.0405% ( 18) 00:07:34.682 11645.243 - 11695.655: 83.1277% ( 13) 00:07:34.682 11695.655 - 11746.068: 83.2417% ( 17) 00:07:34.682 11746.068 - 11796.480: 83.3490% ( 16) 00:07:34.682 11796.480 - 11846.892: 83.4496% ( 15) 00:07:34.682 11846.892 - 11897.305: 83.5569% ( 16) 00:07:34.682 11897.305 - 11947.717: 83.6709% ( 17) 00:07:34.682 11947.717 - 11998.129: 83.7849% ( 17) 00:07:34.682 11998.129 - 12048.542: 83.9190% ( 20) 00:07:34.682 12048.542 - 12098.954: 84.1135% ( 29) 00:07:34.682 12098.954 - 12149.366: 84.2275% ( 17) 00:07:34.682 12149.366 - 12199.778: 84.3012% ( 11) 00:07:34.682 12199.778 - 12250.191: 84.3549% ( 8) 00:07:34.682 12250.191 - 12300.603: 84.3951% ( 6) 00:07:34.682 12300.603 - 12351.015: 84.4286% ( 5) 00:07:34.682 12351.015 - 12401.428: 84.4823% ( 8) 00:07:34.682 12401.428 - 12451.840: 84.5359% ( 8) 00:07:34.682 
12451.840 - 12502.252: 84.6231% ( 13) 00:07:34.682 12502.252 - 12552.665: 84.6969% ( 11) 00:07:34.682 12552.665 - 12603.077: 84.7438% ( 7) 00:07:34.682 12603.077 - 12653.489: 84.8109% ( 10) 00:07:34.682 12653.489 - 12703.902: 84.8780% ( 10) 00:07:34.682 12703.902 - 12754.314: 84.9383% ( 9) 00:07:34.682 12754.314 - 12804.726: 85.0255% ( 13) 00:07:34.682 12804.726 - 12855.138: 85.1060% ( 12) 00:07:34.682 12855.138 - 12905.551: 85.1596% ( 8) 00:07:34.682 12905.551 - 13006.375: 85.3608% ( 30) 00:07:34.682 13006.375 - 13107.200: 85.5217% ( 24) 00:07:34.682 13107.200 - 13208.025: 85.6961% ( 26) 00:07:34.682 13208.025 - 13308.849: 85.8973% ( 30) 00:07:34.682 13308.849 - 13409.674: 86.1119% ( 32) 00:07:34.682 13409.674 - 13510.498: 86.3264% ( 32) 00:07:34.682 13510.498 - 13611.323: 86.6282% ( 45) 00:07:34.682 13611.323 - 13712.148: 86.8428% ( 32) 00:07:34.682 13712.148 - 13812.972: 87.0976% ( 38) 00:07:34.682 13812.972 - 13913.797: 87.3391% ( 36) 00:07:34.682 13913.797 - 14014.622: 87.6006% ( 39) 00:07:34.682 14014.622 - 14115.446: 87.9024% ( 45) 00:07:34.682 14115.446 - 14216.271: 88.2041% ( 45) 00:07:34.682 14216.271 - 14317.095: 88.4791% ( 41) 00:07:34.682 14317.095 - 14417.920: 88.7473% ( 40) 00:07:34.682 14417.920 - 14518.745: 88.9753% ( 34) 00:07:34.682 14518.745 - 14619.569: 89.2369% ( 39) 00:07:34.682 14619.569 - 14720.394: 89.5252% ( 43) 00:07:34.683 14720.394 - 14821.218: 89.8471% ( 48) 00:07:34.683 14821.218 - 14922.043: 90.1958% ( 52) 00:07:34.683 14922.043 - 15022.868: 90.5915% ( 59) 00:07:34.683 15022.868 - 15123.692: 91.0609% ( 70) 00:07:34.683 15123.692 - 15224.517: 91.4633% ( 60) 00:07:34.683 15224.517 - 15325.342: 91.7650% ( 45) 00:07:34.683 15325.342 - 15426.166: 92.0869% ( 48) 00:07:34.683 15426.166 - 15526.991: 92.3552% ( 40) 00:07:34.683 15526.991 - 15627.815: 92.6502% ( 44) 00:07:34.683 15627.815 - 15728.640: 92.9386% ( 43) 00:07:34.683 15728.640 - 15829.465: 93.2202% ( 42) 00:07:34.683 15829.465 - 15930.289: 93.4683% ( 37) 00:07:34.683 15930.289 - 16031.114: 93.7165% ( 37) 00:07:34.683 16031.114 - 16131.938: 93.9445% ( 34) 00:07:34.683 16131.938 - 16232.763: 94.0853% ( 21) 00:07:34.683 16232.763 - 16333.588: 94.2194% ( 20) 00:07:34.683 16333.588 - 16434.412: 94.3938% ( 26) 00:07:34.683 16434.412 - 16535.237: 94.5614% ( 25) 00:07:34.683 16535.237 - 16636.062: 94.7626% ( 30) 00:07:34.683 16636.062 - 16736.886: 94.9034% ( 21) 00:07:34.683 16736.886 - 16837.711: 95.0644% ( 24) 00:07:34.683 16837.711 - 16938.535: 95.2723% ( 31) 00:07:34.683 16938.535 - 17039.360: 95.5204% ( 37) 00:07:34.683 17039.360 - 17140.185: 95.7819% ( 39) 00:07:34.683 17140.185 - 17241.009: 96.0300% ( 37) 00:07:34.683 17241.009 - 17341.834: 96.2715% ( 36) 00:07:34.683 17341.834 - 17442.658: 96.5263% ( 38) 00:07:34.683 17442.658 - 17543.483: 96.7476% ( 33) 00:07:34.683 17543.483 - 17644.308: 96.9622% ( 32) 00:07:34.683 17644.308 - 17745.132: 97.1768% ( 32) 00:07:34.683 17745.132 - 17845.957: 97.3847% ( 31) 00:07:34.683 17845.957 - 17946.782: 97.5657% ( 27) 00:07:34.683 17946.782 - 18047.606: 97.7535% ( 28) 00:07:34.683 18047.606 - 18148.431: 97.8809% ( 19) 00:07:34.683 18148.431 - 18249.255: 97.9547% ( 11) 00:07:34.683 18249.255 - 18350.080: 98.0217% ( 10) 00:07:34.683 18350.080 - 18450.905: 98.0821% ( 9) 00:07:34.683 18450.905 - 18551.729: 98.1357% ( 8) 00:07:34.683 18551.729 - 18652.554: 98.1760% ( 6) 00:07:34.683 18652.554 - 18753.378: 98.2229% ( 7) 00:07:34.683 18753.378 - 18854.203: 98.2631% ( 6) 00:07:34.683 18854.203 - 18955.028: 98.3101% ( 7) 00:07:34.683 18955.028 - 19055.852: 98.3570% ( 7) 
00:07:34.683 19055.852 - 19156.677: 98.4040% ( 7) 00:07:34.683 19156.677 - 19257.502: 98.4509% ( 7) 00:07:34.683 19257.502 - 19358.326: 98.4911% ( 6) 00:07:34.683 19358.326 - 19459.151: 98.5381% ( 7) 00:07:34.683 19459.151 - 19559.975: 98.5783% ( 6) 00:07:34.683 19559.975 - 19660.800: 98.6387% ( 9) 00:07:34.683 19660.800 - 19761.625: 98.7124% ( 11) 00:07:34.683 19761.625 - 19862.449: 98.7795% ( 10) 00:07:34.683 19862.449 - 19963.274: 98.8332% ( 8) 00:07:34.683 19963.274 - 20064.098: 98.8801% ( 7) 00:07:34.683 20064.098 - 20164.923: 98.9136% ( 5) 00:07:34.683 20164.923 - 20265.748: 98.9539% ( 6) 00:07:34.683 20265.748 - 20366.572: 98.9941% ( 6) 00:07:34.683 20366.572 - 20467.397: 99.0343% ( 6) 00:07:34.683 20467.397 - 20568.222: 99.0679% ( 5) 00:07:34.683 20568.222 - 20669.046: 99.1081% ( 6) 00:07:34.683 20669.046 - 20769.871: 99.1349% ( 4) 00:07:34.683 20769.871 - 20870.695: 99.1416% ( 1) 00:07:34.683 22282.240 - 22383.065: 99.1483% ( 1) 00:07:34.683 22383.065 - 22483.889: 99.1550% ( 1) 00:07:34.683 22483.889 - 22584.714: 99.1752% ( 3) 00:07:34.683 22584.714 - 22685.538: 99.1953% ( 3) 00:07:34.683 22685.538 - 22786.363: 99.2154% ( 3) 00:07:34.683 22786.363 - 22887.188: 99.2355% ( 3) 00:07:34.683 22887.188 - 22988.012: 99.2556% ( 3) 00:07:34.683 22988.012 - 23088.837: 99.2758% ( 3) 00:07:34.683 23088.837 - 23189.662: 99.2959% ( 3) 00:07:34.683 23189.662 - 23290.486: 99.3160% ( 3) 00:07:34.683 23290.486 - 23391.311: 99.3361% ( 3) 00:07:34.683 23391.311 - 23492.135: 99.3562% ( 3) 00:07:34.683 23492.135 - 23592.960: 99.3830% ( 4) 00:07:34.683 23592.960 - 23693.785: 99.4099% ( 4) 00:07:34.683 23693.785 - 23794.609: 99.4300% ( 3) 00:07:34.683 23794.609 - 23895.434: 99.4568% ( 4) 00:07:34.683 23895.434 - 23996.258: 99.4836% ( 4) 00:07:34.683 23996.258 - 24097.083: 99.5038% ( 3) 00:07:34.683 24097.083 - 24197.908: 99.5306% ( 4) 00:07:34.683 24197.908 - 24298.732: 99.5574% ( 4) 00:07:34.683 24298.732 - 24399.557: 99.5708% ( 2) 00:07:34.683 28835.840 - 29037.489: 99.5842% ( 2) 00:07:34.683 29037.489 - 29239.138: 99.6245% ( 6) 00:07:34.683 29239.138 - 29440.788: 99.6647% ( 6) 00:07:34.683 29440.788 - 29642.437: 99.7049% ( 6) 00:07:34.683 29642.437 - 29844.086: 99.7519% ( 7) 00:07:34.683 29844.086 - 30045.735: 99.7921% ( 6) 00:07:34.683 30045.735 - 30247.385: 99.8391% ( 7) 00:07:34.683 30247.385 - 30449.034: 99.8793% ( 6) 00:07:34.683 30449.034 - 30650.683: 99.9262% ( 7) 00:07:34.683 30650.683 - 30852.332: 99.9665% ( 6) 00:07:34.683 30852.332 - 31053.982: 100.0000% ( 5) 00:07:34.683 00:07:34.683 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:34.683 ============================================================================== 00:07:34.683 Range in us Cumulative IO count 00:07:34.683 5696.591 - 5721.797: 0.0067% ( 1) 00:07:34.683 5721.797 - 5747.003: 0.0402% ( 5) 00:07:34.683 5747.003 - 5772.209: 0.1073% ( 10) 00:07:34.683 5772.209 - 5797.415: 0.1811% ( 11) 00:07:34.683 5797.415 - 5822.622: 0.3688% ( 28) 00:07:34.683 5822.622 - 5847.828: 0.5566% ( 28) 00:07:34.683 5847.828 - 5873.034: 0.8450% ( 43) 00:07:34.683 5873.034 - 5898.240: 1.3881% ( 81) 00:07:34.683 5898.240 - 5923.446: 1.9179% ( 79) 00:07:34.683 5923.446 - 5948.652: 2.6489% ( 109) 00:07:34.683 5948.652 - 5973.858: 3.5341% ( 132) 00:07:34.683 5973.858 - 5999.065: 4.3857% ( 127) 00:07:34.683 5999.065 - 6024.271: 5.5258% ( 170) 00:07:34.683 6024.271 - 6049.477: 6.7798% ( 187) 00:07:34.683 6049.477 - 6074.683: 8.2953% ( 226) 00:07:34.683 6074.683 - 6099.889: 9.5963% ( 194) 00:07:34.683 6099.889 - 6125.095: 10.9174% ( 197) 
00:07:34.683 6125.095 - 6150.302: 12.3323% ( 211) 00:07:34.683 6150.302 - 6175.508: 13.8948% ( 233) 00:07:34.683 6175.508 - 6200.714: 15.3769% ( 221) 00:07:34.683 6200.714 - 6225.920: 16.8857% ( 225) 00:07:34.683 6225.920 - 6251.126: 18.4348% ( 231) 00:07:34.683 6251.126 - 6276.332: 20.0510% ( 241) 00:07:34.683 6276.332 - 6301.538: 21.6872% ( 244) 00:07:34.683 6301.538 - 6326.745: 23.2564% ( 234) 00:07:34.683 6326.745 - 6351.951: 24.9598% ( 254) 00:07:34.683 6351.951 - 6377.157: 26.6698% ( 255) 00:07:34.683 6377.157 - 6402.363: 28.4469% ( 265) 00:07:34.683 6402.363 - 6427.569: 30.3380% ( 282) 00:07:34.683 6427.569 - 6452.775: 32.1352% ( 268) 00:07:34.683 6452.775 - 6503.188: 35.6223% ( 520) 00:07:34.683 6503.188 - 6553.600: 39.0759% ( 515) 00:07:34.683 6553.600 - 6604.012: 42.6703% ( 536) 00:07:34.683 6604.012 - 6654.425: 46.1306% ( 516) 00:07:34.683 6654.425 - 6704.837: 49.1953% ( 457) 00:07:34.683 6704.837 - 6755.249: 51.9582% ( 412) 00:07:34.683 6755.249 - 6805.662: 54.2918% ( 348) 00:07:34.683 6805.662 - 6856.074: 56.1762% ( 281) 00:07:34.683 6856.074 - 6906.486: 57.6918% ( 226) 00:07:34.683 6906.486 - 6956.898: 58.9257% ( 184) 00:07:34.683 6956.898 - 7007.311: 59.9651% ( 155) 00:07:34.683 7007.311 - 7057.723: 60.8034% ( 125) 00:07:34.683 7057.723 - 7108.135: 61.5477% ( 111) 00:07:34.683 7108.135 - 7158.548: 62.1043% ( 83) 00:07:34.683 7158.548 - 7208.960: 62.5671% ( 69) 00:07:34.683 7208.960 - 7259.372: 63.0834% ( 77) 00:07:34.683 7259.372 - 7309.785: 63.5193% ( 65) 00:07:34.683 7309.785 - 7360.197: 63.8814% ( 54) 00:07:34.683 7360.197 - 7410.609: 64.2033% ( 48) 00:07:34.683 7410.609 - 7461.022: 64.5118% ( 46) 00:07:34.683 7461.022 - 7511.434: 64.8404% ( 49) 00:07:34.683 7511.434 - 7561.846: 65.1422% ( 45) 00:07:34.683 7561.846 - 7612.258: 65.3769% ( 35) 00:07:34.683 7612.258 - 7662.671: 65.5848% ( 31) 00:07:34.683 7662.671 - 7713.083: 65.8798% ( 44) 00:07:34.683 7713.083 - 7763.495: 66.1548% ( 41) 00:07:34.683 7763.495 - 7813.908: 66.3962% ( 36) 00:07:34.683 7813.908 - 7864.320: 66.6309% ( 35) 00:07:34.683 7864.320 - 7914.732: 66.8991% ( 40) 00:07:34.684 7914.732 - 7965.145: 67.1808% ( 42) 00:07:34.684 7965.145 - 8015.557: 67.4088% ( 34) 00:07:34.684 8015.557 - 8065.969: 67.6502% ( 36) 00:07:34.684 8065.969 - 8116.382: 67.8715% ( 33) 00:07:34.684 8116.382 - 8166.794: 68.0593% ( 28) 00:07:34.684 8166.794 - 8217.206: 68.2739% ( 32) 00:07:34.684 8217.206 - 8267.618: 68.5153% ( 36) 00:07:34.684 8267.618 - 8318.031: 68.7165% ( 30) 00:07:34.684 8318.031 - 8368.443: 68.9445% ( 34) 00:07:34.684 8368.443 - 8418.855: 69.1792% ( 35) 00:07:34.684 8418.855 - 8469.268: 69.4139% ( 35) 00:07:34.684 8469.268 - 8519.680: 69.6218% ( 31) 00:07:34.684 8519.680 - 8570.092: 69.8163% ( 29) 00:07:34.684 8570.092 - 8620.505: 70.1381% ( 48) 00:07:34.684 8620.505 - 8670.917: 70.3796% ( 36) 00:07:34.684 8670.917 - 8721.329: 70.6411% ( 39) 00:07:34.684 8721.329 - 8771.742: 70.9831% ( 51) 00:07:34.684 8771.742 - 8822.154: 71.2379% ( 38) 00:07:34.684 8822.154 - 8872.566: 71.5263% ( 43) 00:07:34.684 8872.566 - 8922.978: 71.8146% ( 43) 00:07:34.684 8922.978 - 8973.391: 72.1097% ( 44) 00:07:34.684 8973.391 - 9023.803: 72.3847% ( 41) 00:07:34.684 9023.803 - 9074.215: 72.6864% ( 45) 00:07:34.684 9074.215 - 9124.628: 72.9681% ( 42) 00:07:34.684 9124.628 - 9175.040: 73.2564% ( 43) 00:07:34.684 9175.040 - 9225.452: 73.5515% ( 44) 00:07:34.684 9225.452 - 9275.865: 73.9472% ( 59) 00:07:34.684 9275.865 - 9326.277: 74.2422% ( 44) 00:07:34.684 9326.277 - 9376.689: 74.4769% ( 35) 00:07:34.684 9376.689 - 9427.102: 74.7183% ( 36) 
00:07:34.684 9427.102 - 9477.514: 74.9531% ( 35) 00:07:34.684 9477.514 - 9527.926: 75.1542% ( 30) 00:07:34.684 9527.926 - 9578.338: 75.3219% ( 25) 00:07:34.684 9578.338 - 9628.751: 75.4828% ( 24) 00:07:34.684 9628.751 - 9679.163: 75.6371% ( 23) 00:07:34.684 9679.163 - 9729.575: 75.7980% ( 24) 00:07:34.684 9729.575 - 9779.988: 75.9523% ( 23) 00:07:34.684 9779.988 - 9830.400: 76.1668% ( 32) 00:07:34.684 9830.400 - 9880.812: 76.3479% ( 27) 00:07:34.684 9880.812 - 9931.225: 76.6027% ( 38) 00:07:34.684 9931.225 - 9981.637: 76.8777% ( 41) 00:07:34.684 9981.637 - 10032.049: 77.1124% ( 35) 00:07:34.684 10032.049 - 10082.462: 77.3471% ( 35) 00:07:34.684 10082.462 - 10132.874: 77.5952% ( 37) 00:07:34.684 10132.874 - 10183.286: 77.8031% ( 31) 00:07:34.684 10183.286 - 10233.698: 78.0646% ( 39) 00:07:34.684 10233.698 - 10284.111: 78.3262% ( 39) 00:07:34.684 10284.111 - 10334.523: 78.5743% ( 37) 00:07:34.684 10334.523 - 10384.935: 78.7956% ( 33) 00:07:34.684 10384.935 - 10435.348: 78.9968% ( 30) 00:07:34.684 10435.348 - 10485.760: 79.1644% ( 25) 00:07:34.684 10485.760 - 10536.172: 79.3321% ( 25) 00:07:34.684 10536.172 - 10586.585: 79.5266% ( 29) 00:07:34.684 10586.585 - 10636.997: 79.7546% ( 34) 00:07:34.684 10636.997 - 10687.409: 79.9289% ( 26) 00:07:34.684 10687.409 - 10737.822: 80.1301% ( 30) 00:07:34.684 10737.822 - 10788.234: 80.3112% ( 27) 00:07:34.684 10788.234 - 10838.646: 80.5459% ( 35) 00:07:34.684 10838.646 - 10889.058: 80.7672% ( 33) 00:07:34.684 10889.058 - 10939.471: 80.9482% ( 27) 00:07:34.684 10939.471 - 10989.883: 81.1427% ( 29) 00:07:34.684 10989.883 - 11040.295: 81.3372% ( 29) 00:07:34.684 11040.295 - 11090.708: 81.5182% ( 27) 00:07:34.684 11090.708 - 11141.120: 81.6725% ( 23) 00:07:34.684 11141.120 - 11191.532: 81.8602% ( 28) 00:07:34.684 11191.532 - 11241.945: 82.0145% ( 23) 00:07:34.684 11241.945 - 11292.357: 82.2023% ( 28) 00:07:34.684 11292.357 - 11342.769: 82.3900% ( 28) 00:07:34.684 11342.769 - 11393.182: 82.5912% ( 30) 00:07:34.684 11393.182 - 11443.594: 82.7857% ( 29) 00:07:34.684 11443.594 - 11494.006: 82.9600% ( 26) 00:07:34.684 11494.006 - 11544.418: 83.1210% ( 24) 00:07:34.684 11544.418 - 11594.831: 83.2685% ( 22) 00:07:34.684 11594.831 - 11645.243: 83.4295% ( 24) 00:07:34.684 11645.243 - 11695.655: 83.5367% ( 16) 00:07:34.684 11695.655 - 11746.068: 83.6440% ( 16) 00:07:34.684 11746.068 - 11796.480: 83.7312% ( 13) 00:07:34.684 11796.480 - 11846.892: 83.8050% ( 11) 00:07:34.684 11846.892 - 11897.305: 83.8855% ( 12) 00:07:34.684 11897.305 - 11947.717: 83.9525% ( 10) 00:07:34.684 11947.717 - 11998.129: 84.0397% ( 13) 00:07:34.684 11998.129 - 12048.542: 84.1135% ( 11) 00:07:34.684 12048.542 - 12098.954: 84.1738% ( 9) 00:07:34.684 12098.954 - 12149.366: 84.2275% ( 8) 00:07:34.684 12149.366 - 12199.778: 84.2811% ( 8) 00:07:34.684 12199.778 - 12250.191: 84.3348% ( 8) 00:07:34.684 12250.191 - 12300.603: 84.3951% ( 9) 00:07:34.684 12300.603 - 12351.015: 84.4421% ( 7) 00:07:34.684 12351.015 - 12401.428: 84.4823% ( 6) 00:07:34.684 12401.428 - 12451.840: 84.5225% ( 6) 00:07:34.684 12451.840 - 12502.252: 84.5896% ( 10) 00:07:34.684 12502.252 - 12552.665: 84.6097% ( 3) 00:07:34.684 12552.665 - 12603.077: 84.6231% ( 2) 00:07:34.684 12603.077 - 12653.489: 84.6499% ( 4) 00:07:34.684 12653.489 - 12703.902: 84.6701% ( 3) 00:07:34.684 12703.902 - 12754.314: 84.7036% ( 5) 00:07:34.684 12754.314 - 12804.726: 84.7438% ( 6) 00:07:34.684 12804.726 - 12855.138: 84.8176% ( 11) 00:07:34.684 12855.138 - 12905.551: 84.8712% ( 8) 00:07:34.684 12905.551 - 13006.375: 84.9785% ( 16) 00:07:34.684 13006.375 
- 13107.200: 85.1194% ( 21) 00:07:34.684 13107.200 - 13208.025: 85.3205% ( 30) 00:07:34.684 13208.025 - 13308.849: 85.5284% ( 31) 00:07:34.684 13308.849 - 13409.674: 85.7631% ( 35) 00:07:34.684 13409.674 - 13510.498: 86.0984% ( 50) 00:07:34.684 13510.498 - 13611.323: 86.4472% ( 52) 00:07:34.684 13611.323 - 13712.148: 86.7892% ( 51) 00:07:34.684 13712.148 - 13812.972: 87.1043% ( 47) 00:07:34.684 13812.972 - 13913.797: 87.3860% ( 42) 00:07:34.684 13913.797 - 14014.622: 87.7548% ( 55) 00:07:34.684 14014.622 - 14115.446: 88.1639% ( 61) 00:07:34.684 14115.446 - 14216.271: 88.6199% ( 68) 00:07:34.684 14216.271 - 14317.095: 89.0893% ( 70) 00:07:34.684 14317.095 - 14417.920: 89.5185% ( 64) 00:07:34.684 14417.920 - 14518.745: 89.8873% ( 55) 00:07:34.684 14518.745 - 14619.569: 90.2361% ( 52) 00:07:34.684 14619.569 - 14720.394: 90.5579% ( 48) 00:07:34.684 14720.394 - 14821.218: 90.8798% ( 48) 00:07:34.684 14821.218 - 14922.043: 91.1615% ( 42) 00:07:34.684 14922.043 - 15022.868: 91.5169% ( 53) 00:07:34.684 15022.868 - 15123.692: 91.7717% ( 38) 00:07:34.684 15123.692 - 15224.517: 91.9260% ( 23) 00:07:34.684 15224.517 - 15325.342: 92.0467% ( 18) 00:07:34.684 15325.342 - 15426.166: 92.1070% ( 9) 00:07:34.684 15426.166 - 15526.991: 92.1808% ( 11) 00:07:34.684 15526.991 - 15627.815: 92.3552% ( 26) 00:07:34.684 15627.815 - 15728.640: 92.5764% ( 33) 00:07:34.684 15728.640 - 15829.465: 92.7441% ( 25) 00:07:34.684 15829.465 - 15930.289: 92.9185% ( 26) 00:07:34.684 15930.289 - 16031.114: 93.1062% ( 28) 00:07:34.684 16031.114 - 16131.938: 93.2940% ( 28) 00:07:34.684 16131.938 - 16232.763: 93.5354% ( 36) 00:07:34.684 16232.763 - 16333.588: 93.7701% ( 35) 00:07:34.684 16333.588 - 16434.412: 94.0048% ( 35) 00:07:34.684 16434.412 - 16535.237: 94.2127% ( 31) 00:07:34.684 16535.237 - 16636.062: 94.4072% ( 29) 00:07:34.684 16636.062 - 16736.886: 94.5748% ( 25) 00:07:34.684 16736.886 - 16837.711: 94.7559% ( 27) 00:07:34.684 16837.711 - 16938.535: 94.9101% ( 23) 00:07:34.684 16938.535 - 17039.360: 95.0241% ( 17) 00:07:34.684 17039.360 - 17140.185: 95.1448% ( 18) 00:07:34.684 17140.185 - 17241.009: 95.2723% ( 19) 00:07:34.684 17241.009 - 17341.834: 95.4936% ( 33) 00:07:34.684 17341.834 - 17442.658: 95.7014% ( 31) 00:07:34.684 17442.658 - 17543.483: 95.9697% ( 40) 00:07:34.684 17543.483 - 17644.308: 96.2111% ( 36) 00:07:34.684 17644.308 - 17745.132: 96.4324% ( 33) 00:07:34.684 17745.132 - 17845.957: 96.6470% ( 32) 00:07:34.684 17845.957 - 17946.782: 96.8281% ( 27) 00:07:34.684 17946.782 - 18047.606: 97.0024% ( 26) 00:07:34.684 18047.606 - 18148.431: 97.1835% ( 27) 00:07:34.684 18148.431 - 18249.255: 97.3780% ( 29) 00:07:34.684 18249.255 - 18350.080: 97.5456% ( 25) 00:07:34.684 18350.080 - 18450.905: 97.6730% ( 19) 00:07:34.684 18450.905 - 18551.729: 97.7870% ( 17) 00:07:34.684 18551.729 - 18652.554: 97.8675% ( 12) 00:07:34.684 18652.554 - 18753.378: 97.9547% ( 13) 00:07:34.684 18753.378 - 18854.203: 98.0486% ( 14) 00:07:34.684 18854.203 - 18955.028: 98.1223% ( 11) 00:07:34.684 18955.028 - 19055.852: 98.2162% ( 14) 00:07:34.684 19055.852 - 19156.677: 98.3168% ( 15) 00:07:34.684 19156.677 - 19257.502: 98.3906% ( 11) 00:07:34.684 19257.502 - 19358.326: 98.4844% ( 14) 00:07:34.684 19358.326 - 19459.151: 98.5649% ( 12) 00:07:34.684 19459.151 - 19559.975: 98.6521% ( 13) 00:07:34.684 19559.975 - 19660.800: 98.7661% ( 17) 00:07:34.684 19660.800 - 19761.625: 98.8801% ( 17) 00:07:34.684 19761.625 - 19862.449: 98.9807% ( 15) 00:07:34.684 19862.449 - 19963.274: 99.0477% ( 10) 00:07:34.684 19963.274 - 20064.098: 99.0880% ( 6) 
00:07:34.684 20064.098 - 20164.923: 99.1215% ( 5) 00:07:34.684 20164.923 - 20265.748: 99.1416% ( 3) 00:07:34.684 20870.695 - 20971.520: 99.1483% ( 1) 00:07:34.684 20971.520 - 21072.345: 99.1685% ( 3) 00:07:34.684 21072.345 - 21173.169: 99.1886% ( 3) 00:07:34.684 21173.169 - 21273.994: 99.2221% ( 5) 00:07:34.684 21273.994 - 21374.818: 99.2489% ( 4) 00:07:34.684 21374.818 - 21475.643: 99.2690% ( 3) 00:07:34.684 21475.643 - 21576.468: 99.2959% ( 4) 00:07:34.684 21576.468 - 21677.292: 99.3227% ( 4) 00:07:34.684 21677.292 - 21778.117: 99.3428% ( 3) 00:07:34.684 21778.117 - 21878.942: 99.3696% ( 4) 00:07:34.684 21878.942 - 21979.766: 99.3830% ( 2) 00:07:34.684 21979.766 - 22080.591: 99.4099% ( 4) 00:07:34.684 22080.591 - 22181.415: 99.4300% ( 3) 00:07:34.684 22181.415 - 22282.240: 99.4568% ( 4) 00:07:34.684 22282.240 - 22383.065: 99.4769% ( 3) 00:07:34.684 22383.065 - 22483.889: 99.4970% ( 3) 00:07:34.684 22483.889 - 22584.714: 99.5239% ( 4) 00:07:34.685 22584.714 - 22685.538: 99.5507% ( 4) 00:07:34.685 22685.538 - 22786.363: 99.5708% ( 3) 00:07:34.685 27625.945 - 27827.594: 99.6111% ( 6) 00:07:34.685 27827.594 - 28029.243: 99.6513% ( 6) 00:07:34.685 28029.243 - 28230.892: 99.6982% ( 7) 00:07:34.685 28230.892 - 28432.542: 99.7385% ( 6) 00:07:34.685 28432.542 - 28634.191: 99.7854% ( 7) 00:07:34.685 28634.191 - 28835.840: 99.8323% ( 7) 00:07:34.685 28835.840 - 29037.489: 99.8793% ( 7) 00:07:34.685 29037.489 - 29239.138: 99.9195% ( 6) 00:07:34.685 29239.138 - 29440.788: 99.9665% ( 7) 00:07:34.685 29440.788 - 29642.437: 100.0000% ( 5) 00:07:34.685 00:07:34.685 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:34.685 ============================================================================== 00:07:34.685 Range in us Cumulative IO count 00:07:34.685 5721.797 - 5747.003: 0.0335% ( 5) 00:07:34.685 5747.003 - 5772.209: 0.0738% ( 6) 00:07:34.685 5772.209 - 5797.415: 0.1073% ( 5) 00:07:34.685 5797.415 - 5822.622: 0.1945% ( 13) 00:07:34.685 5822.622 - 5847.828: 0.3822% ( 28) 00:07:34.685 5847.828 - 5873.034: 0.6907% ( 46) 00:07:34.685 5873.034 - 5898.240: 1.2741% ( 87) 00:07:34.685 5898.240 - 5923.446: 1.9984% ( 108) 00:07:34.685 5923.446 - 5948.652: 2.8702% ( 130) 00:07:34.685 5948.652 - 5973.858: 3.8224% ( 142) 00:07:34.685 5973.858 - 5999.065: 4.7479% ( 138) 00:07:34.685 5999.065 - 6024.271: 5.7739% ( 153) 00:07:34.685 6024.271 - 6049.477: 7.0480% ( 190) 00:07:34.685 6049.477 - 6074.683: 8.4026% ( 202) 00:07:34.685 6074.683 - 6099.889: 9.7639% ( 203) 00:07:34.685 6099.889 - 6125.095: 11.1052% ( 200) 00:07:34.685 6125.095 - 6150.302: 12.4195% ( 196) 00:07:34.685 6150.302 - 6175.508: 13.8278% ( 210) 00:07:34.685 6175.508 - 6200.714: 15.2830% ( 217) 00:07:34.685 6200.714 - 6225.920: 16.7851% ( 224) 00:07:34.685 6225.920 - 6251.126: 18.4348% ( 246) 00:07:34.685 6251.126 - 6276.332: 20.0510% ( 241) 00:07:34.685 6276.332 - 6301.538: 21.6470% ( 238) 00:07:34.685 6301.538 - 6326.745: 23.3101% ( 248) 00:07:34.685 6326.745 - 6351.951: 25.0067% ( 253) 00:07:34.685 6351.951 - 6377.157: 26.7704% ( 263) 00:07:34.685 6377.157 - 6402.363: 28.5207% ( 261) 00:07:34.685 6402.363 - 6427.569: 30.2776% ( 262) 00:07:34.685 6427.569 - 6452.775: 32.0480% ( 264) 00:07:34.685 6452.775 - 6503.188: 35.6357% ( 535) 00:07:34.685 6503.188 - 6553.600: 39.2905% ( 545) 00:07:34.685 6553.600 - 6604.012: 42.7843% ( 521) 00:07:34.685 6604.012 - 6654.425: 46.1306% ( 499) 00:07:34.685 6654.425 - 6704.837: 49.2020% ( 458) 00:07:34.685 6704.837 - 6755.249: 52.0587% ( 426) 00:07:34.685 6755.249 - 6805.662: 54.3656% ( 344) 
00:07:34.685 6805.662 - 6856.074: 56.3975% ( 303) 00:07:34.685 6856.074 - 6906.486: 58.0271% ( 243) 00:07:34.685 6906.486 - 6956.898: 59.4421% ( 211) 00:07:34.685 6956.898 - 7007.311: 60.5351% ( 163) 00:07:34.685 7007.311 - 7057.723: 61.3935% ( 128) 00:07:34.685 7057.723 - 7108.135: 62.0775% ( 102) 00:07:34.685 7108.135 - 7158.548: 62.6475% ( 85) 00:07:34.685 7158.548 - 7208.960: 63.0968% ( 67) 00:07:34.685 7208.960 - 7259.372: 63.5730% ( 71) 00:07:34.685 7259.372 - 7309.785: 63.9619% ( 58) 00:07:34.685 7309.785 - 7360.197: 64.2234% ( 39) 00:07:34.685 7360.197 - 7410.609: 64.4649% ( 36) 00:07:34.685 7410.609 - 7461.022: 64.6795% ( 32) 00:07:34.685 7461.022 - 7511.434: 64.9678% ( 43) 00:07:34.685 7511.434 - 7561.846: 65.2696% ( 45) 00:07:34.685 7561.846 - 7612.258: 65.4909% ( 33) 00:07:34.685 7612.258 - 7662.671: 65.7055% ( 32) 00:07:34.685 7662.671 - 7713.083: 65.9670% ( 39) 00:07:34.685 7713.083 - 7763.495: 66.2017% ( 35) 00:07:34.685 7763.495 - 7813.908: 66.4230% ( 33) 00:07:34.685 7813.908 - 7864.320: 66.6041% ( 27) 00:07:34.685 7864.320 - 7914.732: 66.8053% ( 30) 00:07:34.685 7914.732 - 7965.145: 67.0266% ( 33) 00:07:34.685 7965.145 - 8015.557: 67.2076% ( 27) 00:07:34.685 8015.557 - 8065.969: 67.3417% ( 20) 00:07:34.685 8065.969 - 8116.382: 67.5027% ( 24) 00:07:34.685 8116.382 - 8166.794: 67.7039% ( 30) 00:07:34.685 8166.794 - 8217.206: 67.9922% ( 43) 00:07:34.685 8217.206 - 8267.618: 68.2202% ( 34) 00:07:34.685 8267.618 - 8318.031: 68.4482% ( 34) 00:07:34.685 8318.031 - 8368.443: 68.6494% ( 30) 00:07:34.685 8368.443 - 8418.855: 68.8774% ( 34) 00:07:34.685 8418.855 - 8469.268: 69.1591% ( 42) 00:07:34.685 8469.268 - 8519.680: 69.4273% ( 40) 00:07:34.685 8519.680 - 8570.092: 69.7023% ( 41) 00:07:34.685 8570.092 - 8620.505: 70.0107% ( 46) 00:07:34.685 8620.505 - 8670.917: 70.2991% ( 43) 00:07:34.685 8670.917 - 8721.329: 70.6813% ( 57) 00:07:34.685 8721.329 - 8771.742: 71.0166% ( 50) 00:07:34.685 8771.742 - 8822.154: 71.3050% ( 43) 00:07:34.685 8822.154 - 8872.566: 71.5933% ( 43) 00:07:34.685 8872.566 - 8922.978: 71.8750% ( 42) 00:07:34.685 8922.978 - 8973.391: 72.1365% ( 39) 00:07:34.685 8973.391 - 9023.803: 72.3712% ( 35) 00:07:34.685 9023.803 - 9074.215: 72.6462% ( 41) 00:07:34.685 9074.215 - 9124.628: 72.9010% ( 38) 00:07:34.685 9124.628 - 9175.040: 73.1357% ( 35) 00:07:34.685 9175.040 - 9225.452: 73.4375% ( 45) 00:07:34.685 9225.452 - 9275.865: 73.6119% ( 26) 00:07:34.685 9275.865 - 9326.277: 73.7728% ( 24) 00:07:34.685 9326.277 - 9376.689: 74.0142% ( 36) 00:07:34.685 9376.689 - 9427.102: 74.2355% ( 33) 00:07:34.685 9427.102 - 9477.514: 74.4702% ( 35) 00:07:34.685 9477.514 - 9527.926: 74.7116% ( 36) 00:07:34.685 9527.926 - 9578.338: 74.9195% ( 31) 00:07:34.685 9578.338 - 9628.751: 75.1744% ( 38) 00:07:34.685 9628.751 - 9679.163: 75.3688% ( 29) 00:07:34.685 9679.163 - 9729.575: 75.6170% ( 37) 00:07:34.685 9729.575 - 9779.988: 75.8181% ( 30) 00:07:34.685 9779.988 - 9830.400: 76.0059% ( 28) 00:07:34.685 9830.400 - 9880.812: 76.2473% ( 36) 00:07:34.685 9880.812 - 9931.225: 76.4485% ( 30) 00:07:34.685 9931.225 - 9981.637: 76.6564% ( 31) 00:07:34.685 9981.637 - 10032.049: 76.8374% ( 27) 00:07:34.685 10032.049 - 10082.462: 77.0118% ( 26) 00:07:34.685 10082.462 - 10132.874: 77.2197% ( 31) 00:07:34.685 10132.874 - 10183.286: 77.4142% ( 29) 00:07:34.685 10183.286 - 10233.698: 77.6153% ( 30) 00:07:34.685 10233.698 - 10284.111: 77.8031% ( 28) 00:07:34.685 10284.111 - 10334.523: 77.9775% ( 26) 00:07:34.685 10334.523 - 10384.935: 78.1585% ( 27) 00:07:34.685 10384.935 - 10435.348: 78.3329% ( 26) 
00:07:34.685 10435.348 - 10485.760: 78.5274% ( 29) 00:07:34.685 10485.760 - 10536.172: 78.7487% ( 33) 00:07:34.685 10536.172 - 10586.585: 78.9431% ( 29) 00:07:34.685 10586.585 - 10636.997: 79.1443% ( 30) 00:07:34.685 10636.997 - 10687.409: 79.3254% ( 27) 00:07:34.685 10687.409 - 10737.822: 79.5400% ( 32) 00:07:34.685 10737.822 - 10788.234: 79.7344% ( 29) 00:07:34.685 10788.234 - 10838.646: 79.9289% ( 29) 00:07:34.685 10838.646 - 10889.058: 80.1837% ( 38) 00:07:34.685 10889.058 - 10939.471: 80.4922% ( 46) 00:07:34.685 10939.471 - 10989.883: 80.7806% ( 43) 00:07:34.685 10989.883 - 11040.295: 80.9683% ( 28) 00:07:34.685 11040.295 - 11090.708: 81.1896% ( 33) 00:07:34.685 11090.708 - 11141.120: 81.4311% ( 36) 00:07:34.685 11141.120 - 11191.532: 81.6591% ( 34) 00:07:34.685 11191.532 - 11241.945: 81.9072% ( 37) 00:07:34.685 11241.945 - 11292.357: 82.1352% ( 34) 00:07:34.685 11292.357 - 11342.769: 82.3766% ( 36) 00:07:34.685 11342.769 - 11393.182: 82.5979% ( 33) 00:07:34.685 11393.182 - 11443.594: 82.7991% ( 30) 00:07:34.685 11443.594 - 11494.006: 82.9869% ( 28) 00:07:34.685 11494.006 - 11544.418: 83.1813% ( 29) 00:07:34.685 11544.418 - 11594.831: 83.3356% ( 23) 00:07:34.685 11594.831 - 11645.243: 83.5099% ( 26) 00:07:34.685 11645.243 - 11695.655: 83.6373% ( 19) 00:07:34.685 11695.655 - 11746.068: 83.7245% ( 13) 00:07:34.685 11746.068 - 11796.480: 83.7782% ( 8) 00:07:34.685 11796.480 - 11846.892: 83.8050% ( 4) 00:07:34.685 11846.892 - 11897.305: 83.8385% ( 5) 00:07:34.685 11897.305 - 11947.717: 83.8586% ( 3) 00:07:34.685 11947.717 - 11998.129: 83.9123% ( 8) 00:07:34.685 11998.129 - 12048.542: 83.9391% ( 4) 00:07:34.685 12048.542 - 12098.954: 83.9726% ( 5) 00:07:34.685 12098.954 - 12149.366: 84.0129% ( 6) 00:07:34.685 12149.366 - 12199.778: 84.0397% ( 4) 00:07:34.685 12199.778 - 12250.191: 84.0732% ( 5) 00:07:34.685 12250.191 - 12300.603: 84.1336% ( 9) 00:07:34.685 12300.603 - 12351.015: 84.1671% ( 5) 00:07:34.685 12351.015 - 12401.428: 84.2275% ( 9) 00:07:34.685 12401.428 - 12451.840: 84.3348% ( 16) 00:07:34.685 12451.840 - 12502.252: 84.3951% ( 9) 00:07:34.685 12502.252 - 12552.665: 84.4756% ( 12) 00:07:34.685 12552.665 - 12603.077: 84.5494% ( 11) 00:07:34.685 12603.077 - 12653.489: 84.6298% ( 12) 00:07:34.685 12653.489 - 12703.902: 84.7103% ( 12) 00:07:34.685 12703.902 - 12754.314: 84.8042% ( 14) 00:07:34.685 12754.314 - 12804.726: 84.9115% ( 16) 00:07:34.685 12804.726 - 12855.138: 85.0188% ( 16) 00:07:34.685 12855.138 - 12905.551: 85.1194% ( 15) 00:07:34.685 12905.551 - 13006.375: 85.3541% ( 35) 00:07:34.685 13006.375 - 13107.200: 85.5553% ( 30) 00:07:34.685 13107.200 - 13208.025: 85.8704% ( 47) 00:07:34.685 13208.025 - 13308.849: 86.2057% ( 50) 00:07:34.685 13308.849 - 13409.674: 86.5209% ( 47) 00:07:34.685 13409.674 - 13510.498: 86.8562% ( 50) 00:07:34.685 13510.498 - 13611.323: 87.2116% ( 53) 00:07:34.685 13611.323 - 13712.148: 87.5402% ( 49) 00:07:34.685 13712.148 - 13812.972: 87.9359% ( 59) 00:07:34.685 13812.972 - 13913.797: 88.3114% ( 56) 00:07:34.685 13913.797 - 14014.622: 88.6534% ( 51) 00:07:34.686 14014.622 - 14115.446: 88.9753% ( 48) 00:07:34.686 14115.446 - 14216.271: 89.2771% ( 45) 00:07:34.686 14216.271 - 14317.095: 89.5587% ( 42) 00:07:34.686 14317.095 - 14417.920: 89.8337% ( 41) 00:07:34.686 14417.920 - 14518.745: 90.0416% ( 31) 00:07:34.686 14518.745 - 14619.569: 90.2361% ( 29) 00:07:34.686 14619.569 - 14720.394: 90.4506% ( 32) 00:07:34.686 14720.394 - 14821.218: 90.6719% ( 33) 00:07:34.686 14821.218 - 14922.043: 90.8731% ( 30) 00:07:34.686 14922.043 - 15022.868: 91.1078% ( 
35) 00:07:34.686 15022.868 - 15123.692: 91.3492% ( 36) 00:07:34.686 15123.692 - 15224.517: 91.5705% ( 33) 00:07:34.686 15224.517 - 15325.342: 91.7986% ( 34) 00:07:34.686 15325.342 - 15426.166: 92.0064% ( 31) 00:07:34.686 15426.166 - 15526.991: 92.1942% ( 28) 00:07:34.686 15526.991 - 15627.815: 92.3619% ( 25) 00:07:34.686 15627.815 - 15728.640: 92.5496% ( 28) 00:07:34.686 15728.640 - 15829.465: 92.6837% ( 20) 00:07:34.686 15829.465 - 15930.289: 92.7910% ( 16) 00:07:34.686 15930.289 - 16031.114: 92.9117% ( 18) 00:07:34.686 16031.114 - 16131.938: 93.0123% ( 15) 00:07:34.686 16131.938 - 16232.763: 93.1129% ( 15) 00:07:34.686 16232.763 - 16333.588: 93.2403% ( 19) 00:07:34.686 16333.588 - 16434.412: 93.4013% ( 24) 00:07:34.686 16434.412 - 16535.237: 93.5823% ( 27) 00:07:34.686 16535.237 - 16636.062: 93.7567% ( 26) 00:07:34.686 16636.062 - 16736.886: 93.9445% ( 28) 00:07:34.686 16736.886 - 16837.711: 94.1993% ( 38) 00:07:34.686 16837.711 - 16938.535: 94.4810% ( 42) 00:07:34.686 16938.535 - 17039.360: 94.7626% ( 42) 00:07:34.686 17039.360 - 17140.185: 95.0308% ( 40) 00:07:34.686 17140.185 - 17241.009: 95.2857% ( 38) 00:07:34.686 17241.009 - 17341.834: 95.5137% ( 34) 00:07:34.686 17341.834 - 17442.658: 95.7014% ( 28) 00:07:34.686 17442.658 - 17543.483: 95.8825% ( 27) 00:07:34.686 17543.483 - 17644.308: 96.0904% ( 31) 00:07:34.686 17644.308 - 17745.132: 96.3251% ( 35) 00:07:34.686 17745.132 - 17845.957: 96.5598% ( 35) 00:07:34.686 17845.957 - 17946.782: 96.7476% ( 28) 00:07:34.686 17946.782 - 18047.606: 96.9286% ( 27) 00:07:34.686 18047.606 - 18148.431: 97.1231% ( 29) 00:07:34.686 18148.431 - 18249.255: 97.3310% ( 31) 00:07:34.686 18249.255 - 18350.080: 97.5255% ( 29) 00:07:34.686 18350.080 - 18450.905: 97.7200% ( 29) 00:07:34.686 18450.905 - 18551.729: 97.9010% ( 27) 00:07:34.686 18551.729 - 18652.554: 98.0821% ( 27) 00:07:34.686 18652.554 - 18753.378: 98.2967% ( 32) 00:07:34.686 18753.378 - 18854.203: 98.4911% ( 29) 00:07:34.686 18854.203 - 18955.028: 98.6320% ( 21) 00:07:34.686 18955.028 - 19055.852: 98.7661% ( 20) 00:07:34.686 19055.852 - 19156.677: 98.8801% ( 17) 00:07:34.686 19156.677 - 19257.502: 98.9405% ( 9) 00:07:34.686 19257.502 - 19358.326: 99.0008% ( 9) 00:07:34.686 19358.326 - 19459.151: 99.0746% ( 11) 00:07:34.686 19459.151 - 19559.975: 99.1617% ( 13) 00:07:34.686 19559.975 - 19660.800: 99.2154% ( 8) 00:07:34.686 19660.800 - 19761.625: 99.2422% ( 4) 00:07:34.686 19761.625 - 19862.449: 99.2690% ( 4) 00:07:34.686 19862.449 - 19963.274: 99.2959% ( 4) 00:07:34.686 19963.274 - 20064.098: 99.3227% ( 4) 00:07:34.686 20064.098 - 20164.923: 99.3428% ( 3) 00:07:34.686 20164.923 - 20265.748: 99.3696% ( 4) 00:07:34.686 20265.748 - 20366.572: 99.3965% ( 4) 00:07:34.686 20366.572 - 20467.397: 99.4166% ( 3) 00:07:34.686 20467.397 - 20568.222: 99.4434% ( 4) 00:07:34.686 20568.222 - 20669.046: 99.4702% ( 4) 00:07:34.686 20669.046 - 20769.871: 99.4903% ( 3) 00:07:34.686 20769.871 - 20870.695: 99.5172% ( 4) 00:07:34.686 20870.695 - 20971.520: 99.5440% ( 4) 00:07:34.686 20971.520 - 21072.345: 99.5708% ( 4) 00:07:34.686 26012.751 - 26214.400: 99.5842% ( 2) 00:07:34.686 26214.400 - 26416.049: 99.6245% ( 6) 00:07:34.686 26416.049 - 26617.698: 99.6647% ( 6) 00:07:34.686 26617.698 - 26819.348: 99.7049% ( 6) 00:07:34.686 26819.348 - 27020.997: 99.7519% ( 7) 00:07:34.686 27020.997 - 27222.646: 99.7921% ( 6) 00:07:34.686 27222.646 - 27424.295: 99.8391% ( 7) 00:07:34.686 27424.295 - 27625.945: 99.8860% ( 7) 00:07:34.686 27625.945 - 27827.594: 99.9262% ( 6) 00:07:34.686 27827.594 - 28029.243: 99.9665% ( 6) 
00:07:34.686 28029.243 - 28230.892: 100.0000% ( 5) 00:07:34.686 00:07:34.686 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:34.686 ============================================================================== 00:07:34.686 Range in us Cumulative IO count 00:07:34.686 5721.797 - 5747.003: 0.0134% ( 2) 00:07:34.686 5747.003 - 5772.209: 0.0671% ( 8) 00:07:34.686 5772.209 - 5797.415: 0.2012% ( 20) 00:07:34.686 5797.415 - 5822.622: 0.3755% ( 26) 00:07:34.686 5822.622 - 5847.828: 0.6438% ( 40) 00:07:34.686 5847.828 - 5873.034: 1.0595% ( 62) 00:07:34.686 5873.034 - 5898.240: 1.6296% ( 85) 00:07:34.686 5898.240 - 5923.446: 2.1862% ( 83) 00:07:34.686 5923.446 - 5948.652: 2.8970% ( 106) 00:07:34.686 5948.652 - 5973.858: 3.8492% ( 142) 00:07:34.686 5973.858 - 5999.065: 4.8619% ( 151) 00:07:34.686 5999.065 - 6024.271: 5.9013% ( 155) 00:07:34.686 6024.271 - 6049.477: 6.9608% ( 158) 00:07:34.686 6049.477 - 6074.683: 8.1679% ( 180) 00:07:34.686 6074.683 - 6099.889: 9.4823% ( 196) 00:07:34.686 6099.889 - 6125.095: 10.8235% ( 200) 00:07:34.686 6125.095 - 6150.302: 12.1714% ( 201) 00:07:34.686 6150.302 - 6175.508: 13.5663% ( 208) 00:07:34.686 6175.508 - 6200.714: 15.2025% ( 244) 00:07:34.686 6200.714 - 6225.920: 16.8455% ( 245) 00:07:34.686 6225.920 - 6251.126: 18.7098% ( 278) 00:07:34.686 6251.126 - 6276.332: 20.2924% ( 236) 00:07:34.686 6276.332 - 6301.538: 21.8951% ( 239) 00:07:34.686 6301.538 - 6326.745: 23.4509% ( 232) 00:07:34.686 6326.745 - 6351.951: 25.2884% ( 274) 00:07:34.686 6351.951 - 6377.157: 26.9447% ( 247) 00:07:34.686 6377.157 - 6402.363: 28.6682% ( 257) 00:07:34.686 6402.363 - 6427.569: 30.2508% ( 236) 00:07:34.686 6427.569 - 6452.775: 32.0078% ( 262) 00:07:34.686 6452.775 - 6503.188: 35.4614% ( 515) 00:07:34.686 6503.188 - 6553.600: 39.0558% ( 536) 00:07:34.686 6553.600 - 6604.012: 42.6301% ( 533) 00:07:34.686 6604.012 - 6654.425: 45.9496% ( 495) 00:07:34.686 6654.425 - 6704.837: 48.9002% ( 440) 00:07:34.686 6704.837 - 6755.249: 51.5625% ( 397) 00:07:34.686 6755.249 - 6805.662: 54.0102% ( 365) 00:07:34.686 6805.662 - 6856.074: 55.8074% ( 268) 00:07:34.686 6856.074 - 6906.486: 57.4303% ( 242) 00:07:34.686 6906.486 - 6956.898: 58.8653% ( 214) 00:07:34.686 6956.898 - 7007.311: 60.0657% ( 179) 00:07:34.686 7007.311 - 7057.723: 60.9509% ( 132) 00:07:34.686 7057.723 - 7108.135: 61.6550% ( 105) 00:07:34.686 7108.135 - 7158.548: 62.2452% ( 88) 00:07:34.686 7158.548 - 7208.960: 62.8085% ( 84) 00:07:34.686 7208.960 - 7259.372: 63.2712% ( 69) 00:07:34.686 7259.372 - 7309.785: 63.7004% ( 64) 00:07:34.686 7309.785 - 7360.197: 64.0759% ( 56) 00:07:34.686 7360.197 - 7410.609: 64.4246% ( 52) 00:07:34.686 7410.609 - 7461.022: 64.6929% ( 40) 00:07:34.686 7461.022 - 7511.434: 64.9544% ( 39) 00:07:34.686 7511.434 - 7561.846: 65.2025% ( 37) 00:07:34.686 7561.846 - 7612.258: 65.4506% ( 37) 00:07:34.686 7612.258 - 7662.671: 65.6988% ( 37) 00:07:34.686 7662.671 - 7713.083: 65.9536% ( 38) 00:07:34.686 7713.083 - 7763.495: 66.1615% ( 31) 00:07:34.686 7763.495 - 7813.908: 66.2822% ( 18) 00:07:34.686 7813.908 - 7864.320: 66.4230% ( 21) 00:07:34.686 7864.320 - 7914.732: 66.5571% ( 20) 00:07:34.686 7914.732 - 7965.145: 66.7248% ( 25) 00:07:34.686 7965.145 - 8015.557: 66.9193% ( 29) 00:07:34.686 8015.557 - 8065.969: 67.1070% ( 28) 00:07:34.686 8065.969 - 8116.382: 67.2613% ( 23) 00:07:34.686 8116.382 - 8166.794: 67.4557% ( 29) 00:07:34.686 8166.794 - 8217.206: 67.6569% ( 30) 00:07:34.686 8217.206 - 8267.618: 67.8246% ( 25) 00:07:34.686 8267.618 - 8318.031: 68.0928% ( 40) 00:07:34.686 8318.031 - 
8368.443: 68.3879% ( 44) 00:07:34.686 8368.443 - 8418.855: 68.6226% ( 35) 00:07:34.686 8418.855 - 8469.268: 68.8439% ( 33) 00:07:34.686 8469.268 - 8519.680: 69.1859% ( 51) 00:07:34.686 8519.680 - 8570.092: 69.4944% ( 46) 00:07:34.686 8570.092 - 8620.505: 69.7827% ( 43) 00:07:34.686 8620.505 - 8670.917: 70.1113% ( 49) 00:07:34.686 8670.917 - 8721.329: 70.4600% ( 52) 00:07:34.686 8721.329 - 8771.742: 70.8423% ( 57) 00:07:34.686 8771.742 - 8822.154: 71.2178% ( 56) 00:07:34.686 8822.154 - 8872.566: 71.5531% ( 50) 00:07:34.686 8872.566 - 8922.978: 71.8817% ( 49) 00:07:34.686 8922.978 - 8973.391: 72.1567% ( 41) 00:07:34.686 8973.391 - 9023.803: 72.4718% ( 47) 00:07:34.686 9023.803 - 9074.215: 72.7535% ( 42) 00:07:34.686 9074.215 - 9124.628: 73.0217% ( 40) 00:07:34.686 9124.628 - 9175.040: 73.2766% ( 38) 00:07:34.686 9175.040 - 9225.452: 73.4442% ( 25) 00:07:34.686 9225.452 - 9275.865: 73.6856% ( 36) 00:07:34.686 9275.865 - 9326.277: 73.8868% ( 30) 00:07:34.686 9326.277 - 9376.689: 74.1014% ( 32) 00:07:34.686 9376.689 - 9427.102: 74.3361% ( 35) 00:07:34.686 9427.102 - 9477.514: 74.5775% ( 36) 00:07:34.686 9477.514 - 9527.926: 74.8122% ( 35) 00:07:34.686 9527.926 - 9578.338: 75.0469% ( 35) 00:07:34.686 9578.338 - 9628.751: 75.2749% ( 34) 00:07:34.686 9628.751 - 9679.163: 75.4560% ( 27) 00:07:34.686 9679.163 - 9729.575: 75.5901% ( 20) 00:07:34.686 9729.575 - 9779.988: 75.7511% ( 24) 00:07:34.686 9779.988 - 9830.400: 75.9120% ( 24) 00:07:34.686 9830.400 - 9880.812: 76.0595% ( 22) 00:07:34.686 9880.812 - 9931.225: 76.2138% ( 23) 00:07:34.686 9931.225 - 9981.637: 76.3814% ( 25) 00:07:34.686 9981.637 - 10032.049: 76.5424% ( 24) 00:07:34.686 10032.049 - 10082.462: 76.7637% ( 33) 00:07:34.686 10082.462 - 10132.874: 76.9179% ( 23) 00:07:34.686 10132.874 - 10183.286: 77.0789% ( 24) 00:07:34.687 10183.286 - 10233.698: 77.3002% ( 33) 00:07:34.687 10233.698 - 10284.111: 77.4477% ( 22) 00:07:34.687 10284.111 - 10334.523: 77.6355% ( 28) 00:07:34.687 10334.523 - 10384.935: 77.8031% ( 25) 00:07:34.687 10384.935 - 10435.348: 78.0445% ( 36) 00:07:34.687 10435.348 - 10485.760: 78.2792% ( 35) 00:07:34.687 10485.760 - 10536.172: 78.5072% ( 34) 00:07:34.687 10536.172 - 10586.585: 78.7084% ( 30) 00:07:34.687 10586.585 - 10636.997: 78.9297% ( 33) 00:07:34.687 10636.997 - 10687.409: 79.1644% ( 35) 00:07:34.687 10687.409 - 10737.822: 79.3857% ( 33) 00:07:34.687 10737.822 - 10788.234: 79.6070% ( 33) 00:07:34.687 10788.234 - 10838.646: 79.8484% ( 36) 00:07:34.687 10838.646 - 10889.058: 80.0966% ( 37) 00:07:34.687 10889.058 - 10939.471: 80.3581% ( 39) 00:07:34.687 10939.471 - 10989.883: 80.5995% ( 36) 00:07:34.687 10989.883 - 11040.295: 80.8409% ( 36) 00:07:34.687 11040.295 - 11090.708: 81.0823% ( 36) 00:07:34.687 11090.708 - 11141.120: 81.3171% ( 35) 00:07:34.687 11141.120 - 11191.532: 81.5384% ( 33) 00:07:34.687 11191.532 - 11241.945: 81.7261% ( 28) 00:07:34.687 11241.945 - 11292.357: 81.8602% ( 20) 00:07:34.687 11292.357 - 11342.769: 81.9541% ( 14) 00:07:34.687 11342.769 - 11393.182: 82.0413% ( 13) 00:07:34.687 11393.182 - 11443.594: 82.1151% ( 11) 00:07:34.687 11443.594 - 11494.006: 82.2157% ( 15) 00:07:34.687 11494.006 - 11544.418: 82.3364% ( 18) 00:07:34.687 11544.418 - 11594.831: 82.4772% ( 21) 00:07:34.687 11594.831 - 11645.243: 82.6113% ( 20) 00:07:34.687 11645.243 - 11695.655: 82.6784% ( 10) 00:07:34.687 11695.655 - 11746.068: 82.7790% ( 15) 00:07:34.687 11746.068 - 11796.480: 82.8729% ( 14) 00:07:34.687 11796.480 - 11846.892: 82.9734% ( 15) 00:07:34.687 11846.892 - 11897.305: 83.0740% ( 15) 00:07:34.687 11897.305 - 
11947.717: 83.1880% ( 17) 00:07:34.687 11947.717 - 11998.129: 83.3222% ( 20) 00:07:34.687 11998.129 - 12048.542: 83.4429% ( 18) 00:07:34.687 12048.542 - 12098.954: 83.5569% ( 17) 00:07:34.687 12098.954 - 12149.366: 83.6776% ( 18) 00:07:34.687 12149.366 - 12199.778: 83.7849% ( 16) 00:07:34.687 12199.778 - 12250.191: 83.8855% ( 15) 00:07:34.687 12250.191 - 12300.603: 83.9928% ( 16) 00:07:34.687 12300.603 - 12351.015: 84.1068% ( 17) 00:07:34.687 12351.015 - 12401.428: 84.2342% ( 19) 00:07:34.687 12401.428 - 12451.840: 84.3616% ( 19) 00:07:34.687 12451.840 - 12502.252: 84.4823% ( 18) 00:07:34.687 12502.252 - 12552.665: 84.5762% ( 14) 00:07:34.687 12552.665 - 12603.077: 84.6701% ( 14) 00:07:34.687 12603.077 - 12653.489: 84.7774% ( 16) 00:07:34.687 12653.489 - 12703.902: 84.8712% ( 14) 00:07:34.687 12703.902 - 12754.314: 84.9920% ( 18) 00:07:34.687 12754.314 - 12804.726: 85.1127% ( 18) 00:07:34.687 12804.726 - 12855.138: 85.2870% ( 26) 00:07:34.687 12855.138 - 12905.551: 85.4547% ( 25) 00:07:34.687 12905.551 - 13006.375: 85.8637% ( 61) 00:07:34.687 13006.375 - 13107.200: 86.2929% ( 64) 00:07:34.687 13107.200 - 13208.025: 86.6953% ( 60) 00:07:34.687 13208.025 - 13308.849: 87.0373% ( 51) 00:07:34.687 13308.849 - 13409.674: 87.3592% ( 48) 00:07:34.687 13409.674 - 13510.498: 87.6609% ( 45) 00:07:34.687 13510.498 - 13611.323: 87.9828% ( 48) 00:07:34.687 13611.323 - 13712.148: 88.3383% ( 53) 00:07:34.687 13712.148 - 13812.972: 88.6065% ( 40) 00:07:34.687 13812.972 - 13913.797: 88.7674% ( 24) 00:07:34.687 13913.797 - 14014.622: 88.9150% ( 22) 00:07:34.687 14014.622 - 14115.446: 89.0692% ( 23) 00:07:34.687 14115.446 - 14216.271: 89.2369% ( 25) 00:07:34.687 14216.271 - 14317.095: 89.3844% ( 22) 00:07:34.687 14317.095 - 14417.920: 89.5252% ( 21) 00:07:34.687 14417.920 - 14518.745: 89.6660% ( 21) 00:07:34.687 14518.745 - 14619.569: 89.8605% ( 29) 00:07:34.687 14619.569 - 14720.394: 90.0751% ( 32) 00:07:34.687 14720.394 - 14821.218: 90.3165% ( 36) 00:07:34.687 14821.218 - 14922.043: 90.5512% ( 35) 00:07:34.687 14922.043 - 15022.868: 90.7591% ( 31) 00:07:34.687 15022.868 - 15123.692: 91.0139% ( 38) 00:07:34.687 15123.692 - 15224.517: 91.2621% ( 37) 00:07:34.687 15224.517 - 15325.342: 91.5236% ( 39) 00:07:34.687 15325.342 - 15426.166: 91.7986% ( 41) 00:07:34.687 15426.166 - 15526.991: 92.0131% ( 32) 00:07:34.687 15526.991 - 15627.815: 92.2411% ( 34) 00:07:34.687 15627.815 - 15728.640: 92.4356% ( 29) 00:07:34.687 15728.640 - 15829.465: 92.5899% ( 23) 00:07:34.687 15829.465 - 15930.289: 92.7374% ( 22) 00:07:34.687 15930.289 - 16031.114: 92.9587% ( 33) 00:07:34.687 16031.114 - 16131.938: 93.0995% ( 21) 00:07:34.687 16131.938 - 16232.763: 93.2269% ( 19) 00:07:34.687 16232.763 - 16333.588: 93.3611% ( 20) 00:07:34.687 16333.588 - 16434.412: 93.5086% ( 22) 00:07:34.687 16434.412 - 16535.237: 93.6896% ( 27) 00:07:34.687 16535.237 - 16636.062: 93.9177% ( 34) 00:07:34.687 16636.062 - 16736.886: 94.1591% ( 36) 00:07:34.687 16736.886 - 16837.711: 94.3871% ( 34) 00:07:34.687 16837.711 - 16938.535: 94.5815% ( 29) 00:07:34.687 16938.535 - 17039.360: 94.7760% ( 29) 00:07:34.687 17039.360 - 17140.185: 94.9906% ( 32) 00:07:34.687 17140.185 - 17241.009: 95.2991% ( 46) 00:07:34.687 17241.009 - 17341.834: 95.6143% ( 47) 00:07:34.687 17341.834 - 17442.658: 95.9764% ( 54) 00:07:34.687 17442.658 - 17543.483: 96.2782% ( 45) 00:07:34.687 17543.483 - 17644.308: 96.5799% ( 45) 00:07:34.687 17644.308 - 17745.132: 96.8683% ( 43) 00:07:34.687 17745.132 - 17845.957: 97.1701% ( 45) 00:07:34.687 17845.957 - 17946.782: 97.4115% ( 36) 
00:07:34.687 17946.782 - 18047.606: 97.6864% ( 41) 00:07:34.687 18047.606 - 18148.431: 97.9278% ( 36) 00:07:34.687 18148.431 - 18249.255: 98.1961% ( 40) 00:07:34.687 18249.255 - 18350.080: 98.4174% ( 33) 00:07:34.687 18350.080 - 18450.905: 98.6186% ( 30) 00:07:34.687 18450.905 - 18551.729: 98.7728% ( 23) 00:07:34.687 18551.729 - 18652.554: 98.9002% ( 19) 00:07:34.687 18652.554 - 18753.378: 99.0008% ( 15) 00:07:34.687 18753.378 - 18854.203: 99.1215% ( 18) 00:07:34.687 18854.203 - 18955.028: 99.2154% ( 14) 00:07:34.687 18955.028 - 19055.852: 99.2825% ( 10) 00:07:34.687 19055.852 - 19156.677: 99.3562% ( 11) 00:07:34.687 19156.677 - 19257.502: 99.3965% ( 6) 00:07:34.687 19257.502 - 19358.326: 99.4434% ( 7) 00:07:34.687 19358.326 - 19459.151: 99.4836% ( 6) 00:07:34.687 19459.151 - 19559.975: 99.5105% ( 4) 00:07:34.687 19559.975 - 19660.800: 99.5373% ( 4) 00:07:34.687 19660.800 - 19761.625: 99.5641% ( 4) 00:07:34.687 19761.625 - 19862.449: 99.5708% ( 1) 00:07:34.687 24802.855 - 24903.680: 99.5909% ( 3) 00:07:34.687 24903.680 - 25004.505: 99.6111% ( 3) 00:07:34.687 25004.505 - 25105.329: 99.6312% ( 3) 00:07:34.687 25105.329 - 25206.154: 99.6580% ( 4) 00:07:34.687 25206.154 - 25306.978: 99.6781% ( 3) 00:07:34.687 25306.978 - 25407.803: 99.6982% ( 3) 00:07:34.687 25407.803 - 25508.628: 99.7251% ( 4) 00:07:34.687 25508.628 - 25609.452: 99.7452% ( 3) 00:07:34.687 25609.452 - 25710.277: 99.7653% ( 3) 00:07:34.687 25710.277 - 25811.102: 99.7854% ( 3) 00:07:34.687 25811.102 - 26012.751: 99.8323% ( 7) 00:07:34.687 26012.751 - 26214.400: 99.8793% ( 7) 00:07:34.687 26214.400 - 26416.049: 99.9195% ( 6) 00:07:34.687 26416.049 - 26617.698: 99.9665% ( 7) 00:07:34.687 26617.698 - 26819.348: 100.0000% ( 5) 00:07:34.687 00:07:34.687 20:34:51 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:35.623 Initializing NVMe Controllers 00:07:35.623 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:35.623 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:35.623 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:35.623 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:35.623 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:35.623 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:35.623 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:35.623 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:35.623 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:35.623 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:35.623 Initialization complete. Launching workers. 
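A note on reading the spdk_nvme_perf output in this log: the invocation above used -q 128 (queue depth 128), -w write (write workload), -o 12288 (12 KiB I/O size), -t 1 (run for 1 second), -i 0 (shared memory group ID 0), and -LL (latency tracking; giving the flag twice also emits the detailed per-bucket latency histograms seen above and below). Two relationships make the numbers self-checking: the MiB/s column in the device table that follows equals IOPS multiplied by the 12288-byte I/O size, and each histogram row reports the cumulative percentage of I/Os completed by the end of its latency bucket, so a percentile can be read off as the upper bound of the first bucket whose cumulative percentage reaches it. The Python sketch below illustrates both; it is not part of the SPDK test suite, the helper names and regex are invented for this note, and the sample rows are copied verbatim from one of the histograms above.

import re

IO_SIZE_BYTES = 12288  # from the -o 12288 flag in the command above

def mib_per_s(iops: float) -> float:
    # Throughput column of the summary table: IOPS times I/O size, in MiB/s.
    return iops * IO_SIZE_BYTES / (1 << 20)

# Cross-check against the per-device rows in the table below:
# 11308.23 IOPS * 12288 B is about 132.52 MiB/s.
assert round(mib_per_s(11308.23), 2) == 132.52

# Histogram rows have the form "<low_us> - <high_us>: <cumulative%> ( <count>)".
ROW = re.compile(r"([\d.]+)\s*-\s*([\d.]+):\s*([\d.]+)%\s*\(\s*(\d+)\s*\)")

def percentile_us(rows, pct):
    # A percentile is the upper bound of the first bucket whose cumulative
    # completion percentage reaches pct.
    for line in rows:
        m = ROW.search(line)
        if m and float(m.group(3)) >= pct:
            return float(m.group(2))
    raise ValueError("percentile beyond histogram range")

# Two consecutive rows copied from a histogram above:
sample = [
    "17946.782 - 18047.606: 98.0217% ( 16)",
    "18047.606 - 18148.431: 98.1223% ( 15)",
]
print(percentile_us(sample, 98.0))  # prints 18047.606

The bucket widths roughly double as latency grows (about 25 us per bucket near 6 ms, 50 us near 11 ms, 100 us near 18 ms in the histograms above), so a percentile read this way is only as precise as the bucket that contains it.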
00:07:35.623 ======================================================== 00:07:35.623 Latency(us) 00:07:35.623 Device Information : IOPS MiB/s Average min max 00:07:35.623 PCIE (0000:00:10.0) NSID 1 from core 0: 11308.23 132.52 11330.05 6044.19 32364.30 00:07:35.623 PCIE (0000:00:11.0) NSID 1 from core 0: 11308.23 132.52 11312.88 6171.86 30111.38 00:07:35.623 PCIE (0000:00:13.0) NSID 1 from core 0: 11308.23 132.52 11295.36 5958.58 28897.16 00:07:35.623 PCIE (0000:00:12.0) NSID 1 from core 0: 11308.23 132.52 11278.33 5963.78 27136.23 00:07:35.623 PCIE (0000:00:12.0) NSID 2 from core 0: 11308.23 132.52 11261.22 6085.71 25360.24 00:07:35.623 PCIE (0000:00:12.0) NSID 3 from core 0: 11372.12 133.27 11180.82 6104.95 20188.50 00:07:35.623 ======================================================== 00:07:35.623 Total : 67913.29 795.86 11276.35 5958.58 32364.30 00:07:35.623 00:07:35.623 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:35.623 ================================================================================= 00:07:35.623 1.00000% : 6377.157us 00:07:35.623 10.00000% : 6906.486us 00:07:35.623 25.00000% : 7763.495us 00:07:35.623 50.00000% : 11090.708us 00:07:35.623 75.00000% : 13812.972us 00:07:35.623 90.00000% : 15930.289us 00:07:35.623 95.00000% : 17543.483us 00:07:35.623 98.00000% : 19055.852us 00:07:35.623 99.00000% : 25609.452us 00:07:35.623 99.50000% : 30449.034us 00:07:35.623 99.90000% : 31658.929us 00:07:35.623 99.99000% : 32263.877us 00:07:35.623 99.99900% : 32465.526us 00:07:35.623 99.99990% : 32465.526us 00:07:35.624 99.99999% : 32465.526us 00:07:35.624 00:07:35.624 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:35.624 ================================================================================= 00:07:35.624 1.00000% : 6452.775us 00:07:35.624 10.00000% : 6906.486us 00:07:35.624 25.00000% : 7763.495us 00:07:35.624 50.00000% : 11040.295us 00:07:35.624 75.00000% : 13812.972us 00:07:35.624 90.00000% : 16031.114us 00:07:35.624 95.00000% : 17543.483us 00:07:35.624 98.00000% : 19055.852us 00:07:35.624 99.00000% : 24097.083us 00:07:35.624 99.50000% : 28835.840us 00:07:35.624 99.90000% : 30045.735us 00:07:35.624 99.99000% : 30247.385us 00:07:35.624 99.99900% : 30247.385us 00:07:35.624 99.99990% : 30247.385us 00:07:35.624 99.99999% : 30247.385us 00:07:35.624 00:07:35.624 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:35.624 ================================================================================= 00:07:35.624 1.00000% : 6402.363us 00:07:35.624 10.00000% : 6906.486us 00:07:35.624 25.00000% : 7763.495us 00:07:35.624 50.00000% : 11241.945us 00:07:35.624 75.00000% : 13812.972us 00:07:35.624 90.00000% : 15930.289us 00:07:35.624 95.00000% : 17543.483us 00:07:35.624 98.00000% : 18854.203us 00:07:35.624 99.00000% : 22383.065us 00:07:35.624 99.50000% : 27625.945us 00:07:35.624 99.90000% : 28634.191us 00:07:35.624 99.99000% : 29037.489us 00:07:35.624 99.99900% : 29037.489us 00:07:35.624 99.99990% : 29037.489us 00:07:35.624 99.99999% : 29037.489us 00:07:35.624 00:07:35.624 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:35.624 ================================================================================= 00:07:35.624 1.00000% : 6402.363us 00:07:35.624 10.00000% : 6906.486us 00:07:35.624 25.00000% : 7813.908us 00:07:35.624 50.00000% : 11393.182us 00:07:35.624 75.00000% : 13712.148us 00:07:35.624 90.00000% : 15930.289us 00:07:35.624 95.00000% : 17241.009us 00:07:35.624 98.00000% : 18551.729us 
00:07:35.624 99.00000% : 20870.695us 00:07:35.624 99.50000% : 25811.102us 00:07:35.624 99.90000% : 27020.997us 00:07:35.624 99.99000% : 27222.646us 00:07:35.624 99.99900% : 27222.646us 00:07:35.624 99.99990% : 27222.646us 00:07:35.624 99.99999% : 27222.646us 00:07:35.624 00:07:35.624 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:35.624 ================================================================================= 00:07:35.624 1.00000% : 6402.363us 00:07:35.624 10.00000% : 6906.486us 00:07:35.624 25.00000% : 7813.908us 00:07:35.624 50.00000% : 11342.769us 00:07:35.624 75.00000% : 13611.323us 00:07:35.624 90.00000% : 15930.289us 00:07:35.624 95.00000% : 17442.658us 00:07:35.624 98.00000% : 18955.028us 00:07:35.624 99.00000% : 19761.625us 00:07:35.624 99.50000% : 23996.258us 00:07:35.624 99.90000% : 25105.329us 00:07:35.624 99.99000% : 25407.803us 00:07:35.624 99.99900% : 25407.803us 00:07:35.624 99.99990% : 25407.803us 00:07:35.624 99.99999% : 25407.803us 00:07:35.624 00:07:35.624 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:35.624 ================================================================================= 00:07:35.624 1.00000% : 6402.363us 00:07:35.624 10.00000% : 6906.486us 00:07:35.624 25.00000% : 7763.495us 00:07:35.624 50.00000% : 11191.532us 00:07:35.624 75.00000% : 13712.148us 00:07:35.624 90.00000% : 15627.815us 00:07:35.624 95.00000% : 17039.360us 00:07:35.624 98.00000% : 18652.554us 00:07:35.624 99.00000% : 19156.677us 00:07:35.624 99.50000% : 19559.975us 00:07:35.624 99.90000% : 20064.098us 00:07:35.624 99.99000% : 20164.923us 00:07:35.624 99.99900% : 20265.748us 00:07:35.624 99.99990% : 20265.748us 00:07:35.624 99.99999% : 20265.748us 00:07:35.624 00:07:35.624 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:35.624 ============================================================================== 00:07:35.624 Range in us Cumulative IO count 00:07:35.624 6024.271 - 6049.477: 0.0088% ( 1) 00:07:35.624 6049.477 - 6074.683: 0.0265% ( 2) 00:07:35.624 6074.683 - 6099.889: 0.0441% ( 2) 00:07:35.624 6099.889 - 6125.095: 0.0530% ( 1) 00:07:35.624 6125.095 - 6150.302: 0.0883% ( 4) 00:07:35.624 6150.302 - 6175.508: 0.1501% ( 7) 00:07:35.624 6175.508 - 6200.714: 0.1766% ( 3) 00:07:35.624 6200.714 - 6225.920: 0.2207% ( 5) 00:07:35.624 6225.920 - 6251.126: 0.2913% ( 8) 00:07:35.624 6251.126 - 6276.332: 0.4237% ( 15) 00:07:35.624 6276.332 - 6301.538: 0.5385% ( 13) 00:07:35.624 6301.538 - 6326.745: 0.7239% ( 21) 00:07:35.624 6326.745 - 6351.951: 0.9357% ( 24) 00:07:35.624 6351.951 - 6377.157: 1.1917% ( 29) 00:07:35.624 6377.157 - 6402.363: 1.4919% ( 34) 00:07:35.624 6402.363 - 6427.569: 1.8185% ( 37) 00:07:35.624 6427.569 - 6452.775: 2.2422% ( 48) 00:07:35.624 6452.775 - 6503.188: 3.1338% ( 101) 00:07:35.624 6503.188 - 6553.600: 4.0696% ( 106) 00:07:35.624 6553.600 - 6604.012: 5.0053% ( 106) 00:07:35.624 6604.012 - 6654.425: 5.8616% ( 97) 00:07:35.624 6654.425 - 6704.837: 6.8503% ( 112) 00:07:35.624 6704.837 - 6755.249: 7.6977% ( 96) 00:07:35.624 6755.249 - 6805.662: 8.5275% ( 94) 00:07:35.624 6805.662 - 6856.074: 9.5162% ( 112) 00:07:35.624 6856.074 - 6906.486: 10.3372% ( 93) 00:07:35.624 6906.486 - 6956.898: 11.0699% ( 83) 00:07:35.624 6956.898 - 7007.311: 11.9527% ( 100) 00:07:35.624 7007.311 - 7057.723: 12.7207% ( 87) 00:07:35.624 7057.723 - 7108.135: 13.5328% ( 92) 00:07:35.624 7108.135 - 7158.548: 14.3362% ( 91) 00:07:35.624 7158.548 - 7208.960: 15.0777% ( 84) 00:07:35.624 7208.960 - 7259.372: 15.7662% ( 78) 
00:07:35.624 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:35.624 ==============================================================================
00:07:35.624        Range in us     Cumulative IO count
00:07:35.625 [bucket-by-bucket cumulative data, 6024.271us (0.0088%, 1 IO) through 32465.526us (100.0000%)]
00:07:35.625
00:07:35.625 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:35.625 ==============================================================================
00:07:35.625        Range in us     Cumulative IO count
00:07:35.625 [bucket-by-bucket cumulative data, 6150.302us (0.0088%, 1 IO) through 30247.385us (100.0000%)]
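Each percentile in the summary table can be read straight off the matching cumulative histogram: it is the upper edge of the first bucket whose cumulative percentage reaches the target. In the 0000:00:10.0 data, for instance, the cumulative count first reaches 99% in the bucket ending at 25609.452us (99.0113% cumulative), which is exactly the 99.00000% : 25609.452us summary row. A small C sketch of that lookup, hard-coding the three bucket edges and cumulative percentages around that crossing as read from the 0000:00:10.0 dump; illustrative, not the reporting tool's actual code:

/* Derive a summary line such as "99.00000% : 25609.452us" from cumulative
 * histogram entries: the answer is the upper edge of the first bucket whose
 * cumulative percentage reaches the target (illustrative only). */
#include <stdio.h>

struct cum_bucket {
    double hi_us;   /* upper edge of the "Range in us" column */
    double cum_pct; /* cumulative percentage of all IOs */
};

static double percentile_us(const struct cum_bucket *b, int n, double target)
{
    for (int i = 0; i < n; i++)
        if (b[i].cum_pct >= target)
            return b[i].hi_us;
    return b[n - 1].hi_us; /* target == 100: slowest observed bucket */
}

int main(void)
{
    /* The buckets straddling the 99th percentile in the 0000:00:10.0 data. */
    struct cum_bucket b[] = {
        { 25508.628, 98.8965 },
        { 25609.452, 99.0113 },
        { 25710.277, 99.0466 },
    };
    /* Prints 25609.452, matching the 99.00000% summary row. */
    printf("99%% <= %.3fus\n", percentile_us(b, 3, 99.0));
    return 0;
}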
00:07:35.626 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:35.626 ==============================================================================
00:07:35.626        Range in us     Cumulative IO count
00:07:35.627 [bucket-by-bucket cumulative data, 5948.652us (0.0088%, 1 IO) through 29037.489us (100.0000%)]
00:07:35.628 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:35.628 ==============================================================================
00:07:35.628        Range in us     Cumulative IO count
00:07:35.628 [bucket-by-bucket cumulative data, 5948.652us (0.0088%, 1 IO) through 27222.646us (100.0000%)]
00:07:35.890 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:35.890 ==============================================================================
00:07:35.890        Range in us     Cumulative IO count
00:07:35.890 [bucket-by-bucket cumulative data, 6074.683us (0.0088%, 1 IO) through 25407.803us (100.0000%)]
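The histogram lines themselves all follow one shape, "lo - hi: cumulative% ( count )": the parenthesised count is per bucket, the percentage is cumulative over all IOs, and empty buckets are skipped entirely, which is why some ranges are not contiguous. The first entry of each histogram being a single IO at 0.0088% also implies roughly 11,400 IOs sampled per namespace, again an inference from the numbers. A C sketch of that formatting, with made-up bucket edges and counts chosen only to show the skip behaviour:

/* Print histogram lines in this log's "lo - hi: cum% ( count )" format from
 * raw per-bucket IO counts. Edges and counts below are hypothetical. */
#include <stdio.h>

int main(void)
{
    double edges[] = { 6074.683, 6099.889, 6125.095, 6150.302 };
    unsigned long counts[] = { 1, 0, 2 };   /* IOs landing in each bucket */
    unsigned long total = 0, cum = 0;

    for (int i = 0; i < 3; i++)
        total += counts[i];
    for (int i = 0; i < 3; i++) {
        if (counts[i] == 0)                 /* empty buckets are not printed */
            continue;
        cum += counts[i];
        printf("%10.3f - %10.3f: %7.4f%% (%5lu)\n",
               edges[i], edges[i + 1],
               100.0 * (double)cum / (double)total, counts[i]);
    }
    return 0;
}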
00:07:35.891 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:35.891 ==============================================================================
00:07:35.891        Range in us     Cumulative IO count
00:07:35.891 [bucket-by-bucket cumulative data from 6099.889us (0.0088%, 1 IO); the captured log ends mid-histogram at 16535.237us (94.0836%)]
94.4522% ( 42) 00:07:35.892 16636.062 - 16736.886: 94.6717% ( 25) 00:07:35.892 16736.886 - 16837.711: 94.8385% ( 19) 00:07:35.892 16837.711 - 16938.535: 94.9702% ( 15) 00:07:35.892 16938.535 - 17039.360: 95.0755% ( 12) 00:07:35.892 17039.360 - 17140.185: 95.1633% ( 10) 00:07:35.892 17140.185 - 17241.009: 95.2686% ( 12) 00:07:35.892 17241.009 - 17341.834: 95.3652% ( 11) 00:07:35.892 17341.834 - 17442.658: 95.4442% ( 9) 00:07:35.892 17442.658 - 17543.483: 95.4968% ( 6) 00:07:35.892 17543.483 - 17644.308: 95.5495% ( 6) 00:07:35.892 17644.308 - 17745.132: 95.7251% ( 20) 00:07:35.892 17745.132 - 17845.957: 95.8129% ( 10) 00:07:35.892 17845.957 - 17946.782: 96.0147% ( 23) 00:07:35.892 17946.782 - 18047.606: 96.2166% ( 23) 00:07:35.892 18047.606 - 18148.431: 96.4273% ( 24) 00:07:35.892 18148.431 - 18249.255: 96.7258% ( 34) 00:07:35.892 18249.255 - 18350.080: 97.0330% ( 35) 00:07:35.892 18350.080 - 18450.905: 97.4017% ( 42) 00:07:35.892 18450.905 - 18551.729: 97.8143% ( 47) 00:07:35.892 18551.729 - 18652.554: 98.1303% ( 36) 00:07:35.892 18652.554 - 18753.378: 98.4287% ( 34) 00:07:35.892 18753.378 - 18854.203: 98.6306% ( 23) 00:07:35.892 18854.203 - 18955.028: 98.8501% ( 25) 00:07:35.892 18955.028 - 19055.852: 98.9730% ( 14) 00:07:35.892 19055.852 - 19156.677: 99.0432% ( 8) 00:07:35.892 19156.677 - 19257.502: 99.1573% ( 13) 00:07:35.892 19257.502 - 19358.326: 99.3065% ( 17) 00:07:35.892 19358.326 - 19459.151: 99.4733% ( 19) 00:07:35.892 19459.151 - 19559.975: 99.5435% ( 8) 00:07:35.892 19559.975 - 19660.800: 99.6225% ( 9) 00:07:35.892 19660.800 - 19761.625: 99.6928% ( 8) 00:07:35.892 19761.625 - 19862.449: 99.7805% ( 10) 00:07:35.892 19862.449 - 19963.274: 99.8683% ( 10) 00:07:35.892 19963.274 - 20064.098: 99.9386% ( 8) 00:07:35.892 20064.098 - 20164.923: 99.9912% ( 6) 00:07:35.892 20164.923 - 20265.748: 100.0000% ( 1) 00:07:35.892 00:07:35.892 20:34:52 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:35.892 00:07:35.892 real 0m2.545s 00:07:35.892 user 0m2.226s 00:07:35.892 sys 0m0.211s 00:07:35.892 20:34:52 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.892 ************************************ 00:07:35.892 20:34:52 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 END TEST nvme_perf 00:07:35.892 ************************************ 00:07:35.892 20:34:52 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:35.892 20:34:52 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:35.892 20:34:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.892 20:34:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 ************************************ 00:07:35.892 START TEST nvme_hello_world 00:07:35.892 ************************************ 00:07:35.892 20:34:52 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:36.150 Initializing NVMe Controllers 00:07:36.150 Attached to 0000:00:10.0 00:07:36.150 Namespace ID: 1 size: 6GB 00:07:36.150 Attached to 0000:00:11.0 00:07:36.150 Namespace ID: 1 size: 5GB 00:07:36.150 Attached to 0000:00:13.0 00:07:36.150 Namespace ID: 1 size: 1GB 00:07:36.150 Attached to 0000:00:12.0 00:07:36.150 Namespace ID: 1 size: 4GB 00:07:36.150 Namespace ID: 2 size: 4GB 00:07:36.150 Namespace ID: 3 size: 4GB 00:07:36.150 Initialization complete. 00:07:36.150 INFO: using host memory buffer for IO 00:07:36.150 Hello world! 
00:07:36.150 INFO: using host memory buffer for IO 00:07:36.150 Hello world! 00:07:36.150 INFO: using host memory buffer for IO 00:07:36.150 Hello world! 00:07:36.150 INFO: using host memory buffer for IO 00:07:36.150 Hello world! 00:07:36.150 INFO: using host memory buffer for IO 00:07:36.150 Hello world! 00:07:36.150 INFO: using host memory buffer for IO 00:07:36.150 Hello world! 00:07:36.150 00:07:36.150 real 0m0.215s 00:07:36.150 user 0m0.074s 00:07:36.150 sys 0m0.096s 00:07:36.150 20:34:53 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.150 ************************************ 00:07:36.150 END TEST nvme_hello_world 00:07:36.150 ************************************ 00:07:36.150 20:34:53 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:36.150 20:34:53 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:36.150 20:34:53 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.150 20:34:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.150 20:34:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.150 ************************************ 00:07:36.150 START TEST nvme_sgl 00:07:36.150 ************************************ 00:07:36.150 20:34:53 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:36.408 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:36.409 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:36.409 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:36.409 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:36.409 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:36.409 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:36.409 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:36.409 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:36.409 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:36.409 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:36.409 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:36.409 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:36.409 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:36.409 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:36.409 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:36.409 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:36.409 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:36.409 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:36.409 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:36.409 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:36.409 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:36.409 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:36.409 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:36.409 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:36.409 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:36.409 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:36.409 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:36.409 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:36.409 0000:00:12.0: build_io_request_4 Invalid IO length 
parameter 00:07:36.409 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:36.409 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:36.409 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:36.409 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:36.409 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:36.409 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:36.409 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:36.409 NVMe Readv/Writev Request test 00:07:36.409 Attached to 0000:00:10.0 00:07:36.409 Attached to 0000:00:11.0 00:07:36.409 Attached to 0000:00:13.0 00:07:36.409 Attached to 0000:00:12.0 00:07:36.409 0000:00:10.0: build_io_request_2 test passed 00:07:36.409 0000:00:10.0: build_io_request_4 test passed 00:07:36.409 0000:00:10.0: build_io_request_5 test passed 00:07:36.409 0000:00:10.0: build_io_request_6 test passed 00:07:36.409 0000:00:10.0: build_io_request_7 test passed 00:07:36.409 0000:00:10.0: build_io_request_10 test passed 00:07:36.409 0000:00:11.0: build_io_request_2 test passed 00:07:36.409 0000:00:11.0: build_io_request_4 test passed 00:07:36.409 0000:00:11.0: build_io_request_5 test passed 00:07:36.409 0000:00:11.0: build_io_request_6 test passed 00:07:36.409 0000:00:11.0: build_io_request_7 test passed 00:07:36.409 0000:00:11.0: build_io_request_10 test passed 00:07:36.409 Cleaning up... 00:07:36.409 00:07:36.409 real 0m0.313s 00:07:36.409 user 0m0.150s 00:07:36.409 sys 0m0.118s 00:07:36.409 20:34:53 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.409 ************************************ 00:07:36.409 END TEST nvme_sgl 00:07:36.409 20:34:53 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:36.409 ************************************ 00:07:36.409 20:34:53 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:36.409 20:34:53 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.409 20:34:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.409 20:34:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.409 ************************************ 00:07:36.409 START TEST nvme_e2edp 00:07:36.409 ************************************ 00:07:36.409 20:34:53 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:36.669 NVMe Write/Read with End-to-End data protection test 00:07:36.669 Attached to 0000:00:10.0 00:07:36.669 Attached to 0000:00:11.0 00:07:36.669 Attached to 0000:00:13.0 00:07:36.669 Attached to 0000:00:12.0 00:07:36.669 Cleaning up... 
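Each test above is driven by the same run_test wrapper from autotest_common.sh: print a START banner, run the command under xtrace, print an END banner, and pass the command's exit status through. A minimal bash sketch of that pattern (a simplification for illustration, not the autotest_common.sh source):

  run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    "$@"                 # run the test binary with its arguments
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc           # propagate pass/fail to the caller
  }

  run_test_sketch nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl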
00:07:36.669 00:07:36.669 real 0m0.203s 00:07:36.669 user 0m0.074s 00:07:36.669 sys 0m0.083s 00:07:36.669 20:34:53 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.669 20:34:53 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:36.669 ************************************ 00:07:36.669 END TEST nvme_e2edp 00:07:36.669 ************************************ 00:07:36.669 20:34:53 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:36.669 20:34:53 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.669 20:34:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.669 20:34:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.669 ************************************ 00:07:36.669 START TEST nvme_reserve 00:07:36.669 ************************************ 00:07:36.669 20:34:53 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:36.929 ===================================================== 00:07:36.929 NVMe Controller at PCI bus 0, device 16, function 0 00:07:36.929 ===================================================== 00:07:36.929 Reservations: Not Supported 00:07:36.929 ===================================================== 00:07:36.929 NVMe Controller at PCI bus 0, device 17, function 0 00:07:36.929 ===================================================== 00:07:36.929 Reservations: Not Supported 00:07:36.929 ===================================================== 00:07:36.929 NVMe Controller at PCI bus 0, device 19, function 0 00:07:36.929 ===================================================== 00:07:36.929 Reservations: Not Supported 00:07:36.929 ===================================================== 00:07:36.929 NVMe Controller at PCI bus 0, device 18, function 0 00:07:36.929 ===================================================== 00:07:36.929 Reservations: Not Supported 00:07:36.929 Reservation test passed 00:07:36.929 00:07:36.929 real 0m0.206s 00:07:36.929 user 0m0.067s 00:07:36.929 sys 0m0.094s 00:07:36.929 20:34:53 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.929 20:34:53 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:36.929 ************************************ 00:07:36.929 END TEST nvme_reserve 00:07:36.929 ************************************ 00:07:36.929 20:34:53 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:36.929 20:34:53 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.929 20:34:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.929 20:34:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.929 ************************************ 00:07:36.929 START TEST nvme_err_injection 00:07:36.929 ************************************ 00:07:36.929 20:34:53 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:37.191 NVMe Error Injection test 00:07:37.191 Attached to 0000:00:10.0 00:07:37.191 Attached to 0000:00:11.0 00:07:37.191 Attached to 0000:00:13.0 00:07:37.191 Attached to 0000:00:12.0 00:07:37.191 0000:00:11.0: get features failed as expected 00:07:37.191 0000:00:13.0: get features failed as expected 00:07:37.191 0000:00:12.0: get features failed as expected 00:07:37.191 0000:00:10.0: get features failed as expected 00:07:37.191 
0000:00:10.0: get features successfully as expected 00:07:37.191 0000:00:11.0: get features successfully as expected 00:07:37.191 0000:00:13.0: get features successfully as expected 00:07:37.191 0000:00:12.0: get features successfully as expected 00:07:37.191 0000:00:10.0: read failed as expected 00:07:37.191 0000:00:11.0: read failed as expected 00:07:37.191 0000:00:13.0: read failed as expected 00:07:37.191 0000:00:12.0: read failed as expected 00:07:37.191 0000:00:10.0: read successfully as expected 00:07:37.191 0000:00:11.0: read successfully as expected 00:07:37.191 0000:00:13.0: read successfully as expected 00:07:37.191 0000:00:12.0: read successfully as expected 00:07:37.191 Cleaning up... 00:07:37.191 00:07:37.191 real 0m0.234s 00:07:37.191 user 0m0.079s 00:07:37.191 sys 0m0.110s 00:07:37.191 20:34:54 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.191 20:34:54 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:37.191 ************************************ 00:07:37.191 END TEST nvme_err_injection 00:07:37.191 ************************************ 00:07:37.191 20:34:54 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:37.191 20:34:54 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:37.191 20:34:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.191 20:34:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.191 ************************************ 00:07:37.191 START TEST nvme_overhead 00:07:37.191 ************************************ 00:07:37.191 20:34:54 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:38.576 Initializing NVMe Controllers 00:07:38.576 Attached to 0000:00:10.0 00:07:38.576 Attached to 0000:00:11.0 00:07:38.576 Attached to 0000:00:13.0 00:07:38.576 Attached to 0000:00:12.0 00:07:38.576 Initialization complete. Launching workers. 
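The nvme_overhead run that just launched measures per-IO software cost, reported separately for submission and completion; its averages and histograms follow. A sketch of repeating the same 4 KiB workload and keeping only the summary lines (assuming -o is the IO size in bytes, -t the run time in seconds, and -H the switch that prints the histograms shown below):

  /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 \
    | grep -E 'avg, min, max'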
00:07:38.576 submit (in ns) avg, min, max = 11639.0, 10149.2, 67563.1
00:07:38.576 complete (in ns) avg, min, max = 7655.5, 7161.5, 296293.1
00:07:38.576
00:07:38.576 Submit histogram
00:07:38.576 ================
00:07:38.576 Range in us Cumulative Count
00:07:38.576 [submit latency buckets from 10.142 us to 67.742 us, cumulative 0.0149% -> 100.0000%]
00:07:38.577
00:07:38.577 Complete histogram
00:07:38.577 ==================
00:07:38.577 Range in us Cumulative Count
00:07:38.577 [complete latency buckets from 7.138 us to 297.748 us, cumulative 0.0149% -> 100.0000%]
00:07:38.577
00:07:38.577 real 0m1.218s
00:07:38.577 user 0m1.065s
00:07:38.577 sys 0m0.102s
00:07:38.577 20:34:55 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:38.577 20:34:55 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:38.577 ************************************
00:07:38.577 END TEST nvme_overhead
00:07:38.577 ************************************
00:07:38.577 20:34:55 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
20:34:55 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
20:34:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
20:34:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:38.577 ************************************
00:07:38.577 START TEST nvme_arbitration
00:07:38.577 ************************************
00:07:38.577 20:34:55 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:41.869 Initializing NVMe Controllers
00:07:41.869 Attached to 0000:00:10.0
00:07:41.869 Attached to 0000:00:11.0
00:07:41.869 Attached to 0000:00:13.0
00:07:41.869 Attached to 0000:00:12.0
00:07:41.869 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:07:41.869 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:07:41.869 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:07:41.869 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:07:41.869 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:07:41.869 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:07:41.869 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:41.869 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:41.869 Initialization complete. Launching workers.
00:07:41.869 Starting thread on core 1 with urgent priority queue
00:07:41.869 Starting thread on core 2 with urgent priority queue
00:07:41.869 Starting thread on core 3 with urgent priority queue
00:07:41.869 Starting thread on core 0 with urgent priority queue
00:07:41.869 QEMU NVMe Ctrl (12340 ) core 0: 896.00 IO/s 111.61 secs/100000 ios
00:07:41.869 QEMU NVMe Ctrl (12342 ) core 0: 896.00 IO/s 111.61 secs/100000 ios
00:07:41.869 QEMU NVMe Ctrl (12341 ) core 1: 874.67 IO/s 114.33 secs/100000 ios
00:07:41.869 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios
00:07:41.869 QEMU NVMe Ctrl (12343 ) core 2: 938.67 IO/s 106.53 secs/100000 ios
00:07:41.869 QEMU NVMe Ctrl (12342 ) core 3: 896.00 IO/s 111.61 secs/100000 ios
00:07:41.869 ========================================================
00:07:41.869
00:07:41.869 real 0m3.311s
00:07:41.869 user 0m9.254s
00:07:41.869 sys 0m0.112s
00:07:41.869 20:34:58 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:41.869 20:34:58 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:07:41.869 ************************************
00:07:41.869 END TEST nvme_arbitration
00:07:41.869 ************************************
00:07:41.869 20:34:58 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
20:34:58 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
20:34:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
20:34:58 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:41.869 ************************************
00:07:41.869 START TEST nvme_single_aen
00:07:41.869 ************************************
00:07:41.869 20:34:58 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:42.131 Asynchronous Event Request test
00:07:42.131 Attached to 0000:00:10.0
00:07:42.131 Attached to 0000:00:11.0
00:07:42.131 Attached to 0000:00:13.0
00:07:42.131 Attached to 0000:00:12.0
00:07:42.131 Reset controller to setup AER completions for this process
00:07:42.131 Registering asynchronous event callbacks...
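In the arbitration table above, secs/100000 ios is simply the inverse of the measured IO rate: core 0 sustains 896.00 IO/s, and 100000 / 896.00 = 111.61 seconds, exactly the printed figure. A one-line check:

  awk 'BEGIN { printf "%.2f\n", 100000 / 896.00 }'   # prints 111.61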
00:07:42.131 Getting orig temperature thresholds of all controllers 00:07:42.131 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.131 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.131 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.131 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.131 Setting all controllers temperature threshold low to trigger AER 00:07:42.131 Waiting for all controllers temperature threshold to be set lower 00:07:42.131 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.132 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:42.132 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.132 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:42.132 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.132 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:42.132 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.132 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:42.132 Waiting for all controllers to trigger AER and reset threshold 00:07:42.132 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.132 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.132 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.132 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.132 Cleaning up... 00:07:42.132 00:07:42.132 real 0m0.224s 00:07:42.132 user 0m0.079s 00:07:42.132 sys 0m0.100s 00:07:42.132 20:34:59 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.132 20:34:59 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:42.132 ************************************ 00:07:42.132 END TEST nvme_single_aen 00:07:42.132 ************************************ 00:07:42.132 20:34:59 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:42.132 20:34:59 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.132 20:34:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.132 20:34:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.132 ************************************ 00:07:42.132 START TEST nvme_doorbell_aers 00:07:42.132 ************************************ 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:42.132 20:34:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:42.392 [2024-12-06 20:34:59.330126] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:07:52.388 Executing: test_write_invalid_db 00:07:52.388 Waiting for AER completion... 00:07:52.388 Failure: test_write_invalid_db 00:07:52.388 00:07:52.388 Executing: test_invalid_db_write_overflow_sq 00:07:52.388 Waiting for AER completion... 00:07:52.388 Failure: test_invalid_db_write_overflow_sq 00:07:52.388 00:07:52.388 Executing: test_invalid_db_write_overflow_cq 00:07:52.388 Waiting for AER completion... 00:07:52.388 Failure: test_invalid_db_write_overflow_cq 00:07:52.388 00:07:52.388 20:35:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:52.388 20:35:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:52.388 [2024-12-06 20:35:09.367381] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:02.388 Executing: test_write_invalid_db 00:08:02.388 Waiting for AER completion... 00:08:02.388 Failure: test_write_invalid_db 00:08:02.388 00:08:02.388 Executing: test_invalid_db_write_overflow_sq 00:08:02.388 Waiting for AER completion... 00:08:02.388 Failure: test_invalid_db_write_overflow_sq 00:08:02.388 00:08:02.388 Executing: test_invalid_db_write_overflow_cq 00:08:02.388 Waiting for AER completion... 00:08:02.388 Failure: test_invalid_db_write_overflow_cq 00:08:02.388 00:08:02.388 20:35:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:02.388 20:35:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:02.388 [2024-12-06 20:35:19.379529] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:12.385 Executing: test_write_invalid_db 00:08:12.385 Waiting for AER completion... 00:08:12.385 Failure: test_write_invalid_db 00:08:12.385 00:08:12.385 Executing: test_invalid_db_write_overflow_sq 00:08:12.385 Waiting for AER completion... 00:08:12.385 Failure: test_invalid_db_write_overflow_sq 00:08:12.385 00:08:12.385 Executing: test_invalid_db_write_overflow_cq 00:08:12.385 Waiting for AER completion... 
00:08:12.385 Failure: test_invalid_db_write_overflow_cq 00:08:12.385 00:08:12.385 20:35:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:12.385 20:35:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:12.385 [2024-12-06 20:35:29.426187] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:22.485 Executing: test_write_invalid_db 00:08:22.485 Waiting for AER completion... 00:08:22.485 Failure: test_write_invalid_db 00:08:22.485 00:08:22.485 Executing: test_invalid_db_write_overflow_sq 00:08:22.485 Waiting for AER completion... 00:08:22.485 Failure: test_invalid_db_write_overflow_sq 00:08:22.485 00:08:22.485 Executing: test_invalid_db_write_overflow_cq 00:08:22.485 Waiting for AER completion... 00:08:22.485 Failure: test_invalid_db_write_overflow_cq 00:08:22.485 00:08:22.485 00:08:22.485 real 0m40.183s 00:08:22.485 user 0m34.113s 00:08:22.485 sys 0m5.701s 00:08:22.485 20:35:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.485 20:35:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:22.485 ************************************ 00:08:22.485 END TEST nvme_doorbell_aers 00:08:22.485 ************************************ 00:08:22.485 20:35:39 nvme -- nvme/nvme.sh@97 -- # uname 00:08:22.485 20:35:39 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:22.485 20:35:39 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:22.485 20:35:39 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:22.485 20:35:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.485 20:35:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.485 ************************************ 00:08:22.485 START TEST nvme_multi_aen 00:08:22.485 ************************************ 00:08:22.485 20:35:39 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:22.485 [2024-12-06 20:35:39.475686] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:22.485 [2024-12-06 20:35:39.475760] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:22.485 [2024-12-06 20:35:39.475773] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:22.485 [2024-12-06 20:35:39.477425] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:22.485 [2024-12-06 20:35:39.477479] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:22.485 [2024-12-06 20:35:39.477491] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:22.485 [2024-12-06 20:35:39.478641] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. 
Dropping the request. 00:08:22.485 [2024-12-06 20:35:39.478673] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:22.485 [2024-12-06 20:35:39.478683] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:22.485 [2024-12-06 20:35:39.479844] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:22.485 [2024-12-06 20:35:39.479877] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:22.485 [2024-12-06 20:35:39.479897] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63177) is not found. Dropping the request. 00:08:22.485 Child process pid: 63697 00:08:22.746 [Child] Asynchronous Event Request test 00:08:22.746 [Child] Attached to 0000:00:10.0 00:08:22.746 [Child] Attached to 0000:00:11.0 00:08:22.746 [Child] Attached to 0000:00:13.0 00:08:22.746 [Child] Attached to 0000:00:12.0 00:08:22.746 [Child] Registering asynchronous event callbacks... 00:08:22.746 [Child] Getting orig temperature thresholds of all controllers 00:08:22.746 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.746 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.746 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.746 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.746 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:22.746 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.746 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.746 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.746 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.746 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.746 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.746 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.746 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.746 [Child] Cleaning up... 00:08:22.746 Asynchronous Event Request test 00:08:22.746 Attached to 0000:00:10.0 00:08:22.746 Attached to 0000:00:11.0 00:08:22.746 Attached to 0000:00:13.0 00:08:22.746 Attached to 0000:00:12.0 00:08:22.746 Reset controller to setup AER completions for this process 00:08:22.746 Registering asynchronous event callbacks... 
00:08:22.746 Getting orig temperature thresholds of all controllers 00:08:22.746 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.746 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.746 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.746 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.746 Setting all controllers temperature threshold low to trigger AER 00:08:22.746 Waiting for all controllers temperature threshold to be set lower 00:08:22.746 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.747 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:22.747 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.747 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:22.747 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.747 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:22.747 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.747 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:22.747 Waiting for all controllers to trigger AER and reset threshold 00:08:22.747 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.747 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.747 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.747 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.747 Cleaning up... 00:08:22.747 00:08:22.747 real 0m0.457s 00:08:22.747 user 0m0.150s 00:08:22.747 sys 0m0.197s 00:08:22.747 20:35:39 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.747 20:35:39 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:22.747 ************************************ 00:08:22.747 END TEST nvme_multi_aen 00:08:22.747 ************************************ 00:08:22.747 20:35:39 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:22.747 20:35:39 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:22.747 20:35:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.747 20:35:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.747 ************************************ 00:08:22.747 START TEST nvme_startup 00:08:22.747 ************************************ 00:08:22.747 20:35:39 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:23.008 Initializing NVMe Controllers 00:08:23.008 Attached to 0000:00:10.0 00:08:23.008 Attached to 0000:00:11.0 00:08:23.008 Attached to 0000:00:13.0 00:08:23.008 Attached to 0000:00:12.0 00:08:23.008 Initialization complete. 00:08:23.008 Time used:157383.594 (us). 
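The startup tool reports its measurement directly in microseconds, so the figure above converts to well under a second, consistent with the wall-clock real time printed just below:

  awk 'BEGIN { printf "%.3f s\n", 157383.594 / 1e6 }'   # prints 0.157 s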
00:08:23.008 00:08:23.008 real 0m0.222s 00:08:23.008 user 0m0.071s 00:08:23.008 sys 0m0.101s 00:08:23.008 20:35:40 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.008 20:35:40 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:23.008 ************************************ 00:08:23.008 END TEST nvme_startup 00:08:23.008 ************************************ 00:08:23.008 20:35:40 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:23.008 20:35:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.008 20:35:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.008 20:35:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.008 ************************************ 00:08:23.008 START TEST nvme_multi_secondary 00:08:23.008 ************************************ 00:08:23.008 20:35:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:23.008 20:35:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63748 00:08:23.008 20:35:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63749 00:08:23.008 20:35:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:23.008 20:35:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:23.008 20:35:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:26.302 Initializing NVMe Controllers 00:08:26.302 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:26.302 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:26.302 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:26.302 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:26.302 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:26.302 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:26.302 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:26.302 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:26.302 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:26.302 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:26.302 Initialization complete. Launching workers. 
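nvme_multi_secondary exercises SPDK multi-process support: three spdk_nvme_perf instances share the same controllers through shared-memory id 0 (-i 0), one on core 0 for 5 seconds and two on cores 1 and 2 for 3 seconds each, with the shell waiting on the background pids. A simplified sketch of the launch pattern logged above (not the nvme.sh source itself):

  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid0=$!
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!
  "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1   # runs in the foreground
  wait "$pid0"
  wait "$pid1"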
00:08:26.302 ========================================================
00:08:26.302 Latency(us)
00:08:26.302 Device Information : IOPS MiB/s Average min max
00:08:26.302 PCIE (0000:00:10.0) NSID 1 from core 1: 2760.90 10.78 5793.21 1296.87 12525.89
00:08:26.302 PCIE (0000:00:11.0) NSID 1 from core 1: 2760.90 10.78 5795.97 1441.29 12483.94
00:08:26.302 PCIE (0000:00:13.0) NSID 1 from core 1: 2760.90 10.78 5795.94 1549.72 12058.32
00:08:26.302 PCIE (0000:00:12.0) NSID 1 from core 1: 2760.90 10.78 5796.75 1405.39 12315.99
00:08:26.302 PCIE (0000:00:12.0) NSID 2 from core 1: 2760.90 10.78 5796.79 1472.15 12233.42
00:08:26.302 PCIE (0000:00:12.0) NSID 3 from core 1: 2760.90 10.78 5797.39 1444.62 12286.42
00:08:26.302 ========================================================
00:08:26.302 Total : 16565.42 64.71 5796.01 1296.87 12525.89
00:08:26.302
00:08:26.562 Initializing NVMe Controllers
00:08:26.562 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:26.562 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:26.562 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:26.562 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:26.562 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:26.562 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:26.562 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:26.562 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:26.562 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:26.562 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:26.562 Initialization complete. Launching workers.
00:08:26.562 ========================================================
00:08:26.562 Latency(us)
00:08:26.562 Device Information : IOPS MiB/s Average min max
00:08:26.562 PCIE (0000:00:10.0) NSID 1 from core 2: 1245.24 4.86 12840.36 1980.95 35832.67
00:08:26.562 PCIE (0000:00:11.0) NSID 1 from core 2: 1245.24 4.86 12831.77 1718.43 38612.90
00:08:26.562 PCIE (0000:00:13.0) NSID 1 from core 2: 1245.24 4.86 12831.42 1530.37 30219.40
00:08:26.562 PCIE (0000:00:12.0) NSID 1 from core 2: 1245.24 4.86 12831.42 1499.10 33697.35
00:08:26.562 PCIE (0000:00:12.0) NSID 2 from core 2: 1245.24 4.86 12830.82 1704.25 38647.65
00:08:26.562 PCIE (0000:00:12.0) NSID 3 from core 2: 1245.24 4.86 12831.86 2054.58 34721.82
00:08:26.562 ========================================================
00:08:26.562 Total : 7471.42 29.19 12832.94 1499.10 38647.65
00:08:26.562
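In these tables MiB/s is derived from IOPS at the 4 KiB IO size every run here uses; taking the first core-1 row above as a worked check:

  awk 'BEGIN { iops = 2760.90; printf "%.2f MiB/s\n", iops * 4096 / (1024 * 1024) }'   # prints 10.78 MiB/s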
00:08:28.478 ======================================================== 00:08:28.478 Latency(us) 00:08:28.478 Device Information : IOPS MiB/s Average min max 00:08:28.478 PCIE (0000:00:10.0) NSID 1 from core 0: 4418.97 17.26 3619.03 762.33 13311.52 00:08:28.478 PCIE (0000:00:11.0) NSID 1 from core 0: 4418.97 17.26 3620.45 782.14 13801.32 00:08:28.478 PCIE (0000:00:13.0) NSID 1 from core 0: 4418.97 17.26 3620.40 791.86 13224.76 00:08:28.478 PCIE (0000:00:12.0) NSID 1 from core 0: 4418.97 17.26 3620.37 785.19 12931.06 00:08:28.478 PCIE (0000:00:12.0) NSID 2 from core 0: 4418.97 17.26 3620.33 787.29 12239.37 00:08:28.478 PCIE (0000:00:12.0) NSID 3 from core 0: 4418.97 17.26 3620.29 783.26 12328.20 00:08:28.478 ======================================================== 00:08:28.478 Total : 26513.80 103.57 3620.15 762.33 13801.32 00:08:28.478 00:08:28.478 20:35:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63749 00:08:28.478 20:35:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63818 00:08:28.478 20:35:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:28.478 20:35:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63819 00:08:28.478 20:35:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:28.478 20:35:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:31.778 Initializing NVMe Controllers 00:08:31.778 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.778 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.778 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.778 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.778 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:31.778 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:31.778 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:31.778 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:31.778 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:31.778 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:31.778 Initialization complete. Launching workers. 
00:08:31.778 ======================================================== 00:08:31.778 Latency(us) 00:08:31.778 Device Information : IOPS MiB/s Average min max 00:08:31.778 PCIE (0000:00:10.0) NSID 1 from core 1: 2768.93 10.82 5776.35 1672.03 13602.13 00:08:31.778 PCIE (0000:00:11.0) NSID 1 from core 1: 2768.93 10.82 5779.39 1714.18 13077.27 00:08:31.778 PCIE (0000:00:13.0) NSID 1 from core 1: 2768.93 10.82 5779.37 1673.78 12730.44 00:08:31.778 PCIE (0000:00:12.0) NSID 1 from core 1: 2768.93 10.82 5779.94 1489.38 13161.03 00:08:31.778 PCIE (0000:00:12.0) NSID 2 from core 1: 2768.93 10.82 5779.91 1607.62 13447.47 00:08:31.778 PCIE (0000:00:12.0) NSID 3 from core 1: 2774.25 10.84 5772.11 1435.23 13094.46 00:08:31.778 ======================================================== 00:08:31.778 Total : 16618.89 64.92 5777.84 1435.23 13602.13 00:08:31.778 00:08:31.778 Initializing NVMe Controllers 00:08:31.778 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.778 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.778 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.778 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.778 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:31.778 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:31.778 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:31.778 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:31.778 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:31.778 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:31.778 Initialization complete. Launching workers. 00:08:31.778 ======================================================== 00:08:31.778 Latency(us) 00:08:31.778 Device Information : IOPS MiB/s Average min max 00:08:31.778 PCIE (0000:00:10.0) NSID 1 from core 0: 2727.58 10.65 5863.93 1254.80 16646.93 00:08:31.778 PCIE (0000:00:11.0) NSID 1 from core 0: 2727.58 10.65 5865.22 1281.27 16515.97 00:08:31.779 PCIE (0000:00:13.0) NSID 1 from core 0: 2727.58 10.65 5865.71 1236.60 16046.22 00:08:31.779 PCIE (0000:00:12.0) NSID 1 from core 0: 2727.58 10.65 5867.32 1193.71 14739.80 00:08:31.779 PCIE (0000:00:12.0) NSID 2 from core 0: 2727.58 10.65 5868.98 1209.18 13185.11 00:08:31.779 PCIE (0000:00:12.0) NSID 3 from core 0: 2727.58 10.65 5870.09 1150.50 13460.78 00:08:31.779 ======================================================== 00:08:31.779 Total : 16365.46 63.93 5866.88 1150.50 16646.93 00:08:31.779 00:08:33.702 Initializing NVMe Controllers 00:08:33.702 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:33.702 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:33.702 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:33.702 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:33.702 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:33.702 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:33.702 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:33.702 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:33.702 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:33.702 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:33.702 Initialization complete. Launching workers. 
00:08:33.702 ======================================================== 00:08:33.702 Latency(us) 00:08:33.702 Device Information : IOPS MiB/s Average min max 00:08:33.702 PCIE (0000:00:10.0) NSID 1 from core 2: 1541.46 6.02 10376.50 1598.70 37868.87 00:08:33.702 PCIE (0000:00:11.0) NSID 1 from core 2: 1541.46 6.02 10379.71 1652.56 28476.85 00:08:33.702 PCIE (0000:00:13.0) NSID 1 from core 2: 1541.46 6.02 10380.13 1573.84 29480.29 00:08:33.702 PCIE (0000:00:12.0) NSID 1 from core 2: 1541.46 6.02 10379.97 1520.90 29497.16 00:08:33.702 PCIE (0000:00:12.0) NSID 2 from core 2: 1541.46 6.02 10379.82 1585.01 35022.58 00:08:33.702 PCIE (0000:00:12.0) NSID 3 from core 2: 1541.46 6.02 10379.14 1282.12 34833.09 00:08:33.702 ======================================================== 00:08:33.702 Total : 9248.78 36.13 10379.21 1282.12 37868.87 00:08:33.702 00:08:33.702 20:35:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63818 00:08:33.702 20:35:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63819 00:08:33.702 00:08:33.702 real 0m10.671s 00:08:33.702 user 0m18.266s 00:08:33.702 sys 0m0.836s 00:08:33.702 20:35:50 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.702 ************************************ 00:08:33.702 END TEST nvme_multi_secondary 00:08:33.702 ************************************ 00:08:33.703 20:35:50 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:33.703 20:35:50 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:33.703 20:35:50 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:33.703 20:35:50 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62780 ]] 00:08:33.703 20:35:50 nvme -- common/autotest_common.sh@1094 -- # kill 62780 00:08:33.703 20:35:50 nvme -- common/autotest_common.sh@1095 -- # wait 62780 00:08:33.703 [2024-12-06 20:35:50.768962] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.769089] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.769137] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.769166] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.773541] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.773642] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.773669] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.773698] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.777266] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 
00:08:33.703 [2024-12-06 20:35:50.777323] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.777333] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.777344] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.779109] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.779158] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.779169] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.703 [2024-12-06 20:35:50.779180] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63696) is not found. Dropping the request. 00:08:33.964 20:35:50 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:33.964 20:35:50 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:33.964 20:35:50 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:33.965 20:35:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.965 20:35:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.965 20:35:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.965 ************************************ 00:08:33.965 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:33.965 ************************************ 00:08:33.965 20:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:33.965 * Looking for test storage... 
00:08:33.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:33.965 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:33.965 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:33.965 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:33.965 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:33.965 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.965 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.965 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.965 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.965 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:34.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.226 --rc genhtml_branch_coverage=1 00:08:34.226 --rc genhtml_function_coverage=1 00:08:34.226 --rc genhtml_legend=1 00:08:34.226 --rc geninfo_all_blocks=1 00:08:34.226 --rc geninfo_unexecuted_blocks=1 00:08:34.226 00:08:34.226 ' 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:34.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.226 --rc genhtml_branch_coverage=1 00:08:34.226 --rc genhtml_function_coverage=1 00:08:34.226 --rc genhtml_legend=1 00:08:34.226 --rc geninfo_all_blocks=1 00:08:34.226 --rc geninfo_unexecuted_blocks=1 00:08:34.226 00:08:34.226 ' 00:08:34.226 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:34.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.226 --rc genhtml_branch_coverage=1 00:08:34.226 --rc genhtml_function_coverage=1 00:08:34.226 --rc genhtml_legend=1 00:08:34.226 --rc geninfo_all_blocks=1 00:08:34.226 --rc geninfo_unexecuted_blocks=1 00:08:34.227 00:08:34.227 ' 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:34.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.227 --rc genhtml_branch_coverage=1 00:08:34.227 --rc genhtml_function_coverage=1 00:08:34.227 --rc genhtml_legend=1 00:08:34.227 --rc geninfo_all_blocks=1 00:08:34.227 --rc geninfo_unexecuted_blocks=1 00:08:34.227 00:08:34.227 ' 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:34.227 
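The knobs just set (together with the err_injection_sct/sc values on the lines that follow) drive the checks that close this test: the injected admin error is held for up to err_injection_timeout (15 s, in microseconds), and the controller reset issued mid-hold must finish within test_timeout seconds rather than waiting the hold out. A hedged paraphrase of the final assertions — the traced script expresses them as failure conditions, and diff_time plus the nvme_status_* values are computed further down:

    # Status captured from the stuck command must equal the injected values,
    # and the reset must have completed quickly:
    (( nvme_status_sc == err_injection_sc && nvme_status_sct == err_injection_sct )) || exit 1
    (( diff_time <= test_timeout )) || exit 1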
20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63985 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63985 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 63985 ']' 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
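The bdf discovery traced a few lines above reduces to a small helper; a plausible reconstruction, assuming the same rootdir (/home/vagrant/spdk_repo/spdk): gen_nvme.sh emits an SPDK bdev config, jq pulls each controller's PCI address, and the first entry wins.

    get_first_nvme_bdf() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1   # this run found 4 controllers
        echo "${bdfs[0]}"                   # here: 0000:00:10.0
    }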
00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.227 20:35:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:34.227 [2024-12-06 20:35:51.260783] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:08:34.227 [2024-12-06 20:35:51.260948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63985 ] 00:08:34.488 [2024-12-06 20:35:51.437872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.488 [2024-12-06 20:35:51.577456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.488 [2024-12-06 20:35:51.577780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.488 [2024-12-06 20:35:51.578668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.488 [2024-12-06 20:35:51.578771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:35.434 nvme0n1 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_P6ITu.txt 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:35.434 true 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733517352 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64008 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:35.434 20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:35.434 
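Stripped of the xtrace noise, the setup above is three RPCs. The base64 blob on the bdev_nvme_send_cmd line is a raw 64-byte admin submission entry: opcode 0x0a (GET FEATURES) with cdw10=7 (NUMBER OF QUEUES), as the completion messages below confirm. Condensed, with CMD standing in for that blob:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Arm a one-shot error (sct=0, sc=1) on admin opcode 10, and hold the
    # command for up to 15 s instead of submitting it to the drive:
    $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Fire the GET FEATURES command; it now sits stuck on the injected error:
    $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$CMD" &
    # Reset the controller; the reset path must complete the pending admin
    # request manually rather than hang (the NOTICE lines that follow):
    $RPC bdev_nvme_reset_controller nvme0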
20:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:37.349 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:37.349 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.349 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:37.349 [2024-12-06 20:35:54.410632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:37.349 [2024-12-06 20:35:54.411034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:37.349 [2024-12-06 20:35:54.411074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:37.349 [2024-12-06 20:35:54.411091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:37.350 [2024-12-06 20:35:54.413396] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:37.350 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.350 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64008 00:08:37.350 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64008 00:08:37.350 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64008 00:08:37.350 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:37.350 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:37.350 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:37.350 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.350 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:37.350 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.350 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:37.350 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_P6ITu.txt 00:08:37.609 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:37.609 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:37.609 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_P6ITu.txt 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63985 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 63985 ']' 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 63985 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63985 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.610 killing process with pid 63985 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63985' 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 63985 00:08:37.610 20:35:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 63985 00:08:39.524 20:35:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:39.524 20:35:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:39.524 00:08:39.524 real 0m5.216s 00:08:39.524 user 0m18.311s 00:08:39.524 sys 0m0.648s 00:08:39.524 20:35:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
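The status decode above can be reproduced standalone. The completion blob written to err_inj_P6ITu.txt is 16 bytes, all zero except the status word in bytes 14-15 (0x0002); per the NVMe completion layout, SC occupies bits 1-8 and SCT bits 9-11 of that word, which is what the two base64_decode_bits calls extract:

    cpl=AAAAAAAAAAAAAAAAAAACAA==
    bytes=($(base64 -d <(printf '%s' "$cpl") | hexdump -ve '/1 "0x%02x\n"'))
    status=$(( bytes[14] | bytes[15] << 8 ))   # 0x0002
    printf 'SC=0x%x SCT=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    # -> SC=0x1 SCT=0x0: exactly the injected sct=0 / sc=1, so the test passes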
xtrace_disable 00:08:39.524 20:35:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:39.524 ************************************ 00:08:39.524 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:39.524 ************************************ 00:08:39.524 20:35:56 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:39.524 20:35:56 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:39.524 20:35:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.524 20:35:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.524 20:35:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.524 ************************************ 00:08:39.524 START TEST nvme_fio 00:08:39.524 ************************************ 00:08:39.524 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:39.524 20:35:56 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:39.524 20:35:56 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:39.524 20:35:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:39.524 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:39.524 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:39.524 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:39.524 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:39.524 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:39.524 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:39.524 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:39.525 20:35:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:39.525 20:35:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:39.525 20:35:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:39.525 20:35:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:39.525 20:35:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:39.525 20:35:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:39.525 20:35:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:39.785 20:35:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:39.785 20:35:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:39.785 20:35:56 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:39.785 20:35:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:40.046 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:40.046 fio-3.35 00:08:40.046 Starting 1 thread 00:08:46.617 00:08:46.617 test: (groupid=0, jobs=1): err= 0: pid=64149: Fri Dec 6 20:36:03 2024 00:08:46.617 read: IOPS=23.7k, BW=92.7MiB/s (97.2MB/s)(186MiB/2001msec) 00:08:46.617 slat (nsec): min=4215, max=54958, avg=5044.80, stdev=2147.38 00:08:46.617 clat (usec): min=259, max=10267, avg=2690.16, stdev=831.82 00:08:46.617 lat (usec): min=264, max=10299, avg=2695.20, stdev=833.17 00:08:46.617 clat percentiles (usec): 00:08:46.617 | 1.00th=[ 1942], 5.00th=[ 2089], 10.00th=[ 2180], 20.00th=[ 2278], 00:08:46.617 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:08:46.617 | 70.00th=[ 2638], 80.00th=[ 2769], 90.00th=[ 3261], 95.00th=[ 4686], 00:08:46.617 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 7177], 99.95th=[ 7373], 00:08:46.617 | 99.99th=[10028] 00:08:46.617 bw ( KiB/s): min=92664, max=96800, per=99.64%, avg=94621.33, stdev=2076.86, samples=3 00:08:46.617 iops : min=23166, max=24200, avg=23655.33, stdev=519.22, samples=3 00:08:46.617 write: IOPS=23.6k, BW=92.2MiB/s (96.7MB/s)(184MiB/2001msec); 0 zone resets 00:08:46.617 slat (nsec): min=4310, max=51913, avg=5290.95, stdev=2125.13 00:08:46.617 clat (usec): min=316, max=10110, avg=2699.53, stdev=827.43 00:08:46.617 lat (usec): min=321, max=10115, avg=2704.82, stdev=828.73 00:08:46.617 clat percentiles (usec): 00:08:46.617 | 1.00th=[ 1958], 5.00th=[ 2114], 10.00th=[ 2180], 20.00th=[ 2278], 00:08:46.617 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:08:46.617 | 70.00th=[ 2638], 80.00th=[ 2769], 90.00th=[ 3294], 95.00th=[ 4686], 00:08:46.617 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 7111], 99.95th=[ 7439], 00:08:46.617 | 99.99th=[ 9634] 00:08:46.617 bw ( KiB/s): min=92416, max=98080, per=100.00%, avg=94592.00, stdev=3051.43, samples=3 00:08:46.617 iops : min=23104, max=24520, avg=23648.00, stdev=762.86, samples=3 00:08:46.617 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:08:46.617 lat (msec) : 2=1.94%, 4=91.54%, 10=6.48%, 20=0.01% 00:08:46.617 cpu : usr=99.15%, sys=0.10%, ctx=14, majf=0, 
minf=608 00:08:46.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:46.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.617 issued rwts: total=47503,47219,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.617 00:08:46.617 Run status group 0 (all jobs): 00:08:46.617 READ: bw=92.7MiB/s (97.2MB/s), 92.7MiB/s-92.7MiB/s (97.2MB/s-97.2MB/s), io=186MiB (195MB), run=2001-2001msec 00:08:46.617 WRITE: bw=92.2MiB/s (96.7MB/s), 92.2MiB/s-92.2MiB/s (96.7MB/s-96.7MB/s), io=184MiB (193MB), run=2001-2001msec 00:08:46.617 ----------------------------------------------------- 00:08:46.617 Suppressions used: 00:08:46.617 count bytes template 00:08:46.617 1 32 /usr/src/fio/parse.c 00:08:46.617 1 8 libtcmalloc_minimal.so 00:08:46.617 ----------------------------------------------------- 00:08:46.617 00:08:46.617 20:36:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:46.617 20:36:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:46.617 20:36:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:46.617 20:36:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:46.617 20:36:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:46.617 20:36:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:46.879 20:36:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:46.879 20:36:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:46.879 20:36:03 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:46.879 20:36:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:46.879 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:46.879 fio-3.35 00:08:46.879 Starting 1 thread 00:08:53.441 00:08:53.441 test: (groupid=0, jobs=1): err= 0: pid=64205: Fri Dec 6 20:36:09 2024 00:08:53.441 read: IOPS=22.3k, BW=87.3MiB/s (91.5MB/s)(175MiB/2001msec) 00:08:53.441 slat (nsec): min=4201, max=62013, avg=5168.68, stdev=2301.75 00:08:53.441 clat (usec): min=926, max=7116, avg=2854.28, stdev=814.72 00:08:53.441 lat (usec): min=930, max=7142, avg=2859.45, stdev=816.08 00:08:53.441 clat percentiles (usec): 00:08:53.441 | 1.00th=[ 1991], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2442], 00:08:53.441 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2638], 00:08:53.441 | 70.00th=[ 2737], 80.00th=[ 2933], 90.00th=[ 3818], 95.00th=[ 5014], 00:08:53.441 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 6783], 99.95th=[ 6849], 00:08:53.441 | 99.99th=[ 6980] 00:08:53.441 bw ( KiB/s): min=84184, max=90552, per=96.72%, avg=86458.67, stdev=3552.26, samples=3 00:08:53.441 iops : min=21046, max=22638, avg=21614.67, stdev=888.06, samples=3 00:08:53.441 write: IOPS=22.2k, BW=86.7MiB/s (90.9MB/s)(173MiB/2001msec); 0 zone resets 00:08:53.441 slat (nsec): min=4282, max=72803, avg=5508.99, stdev=2297.47 00:08:53.441 clat (usec): min=950, max=11144, avg=2872.98, stdev=858.50 00:08:53.441 lat (usec): min=955, max=11149, avg=2878.49, stdev=859.75 00:08:53.441 clat percentiles (usec): 00:08:53.441 | 1.00th=[ 1991], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2442], 00:08:53.441 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2638], 00:08:53.441 | 70.00th=[ 2769], 80.00th=[ 2933], 90.00th=[ 3884], 95.00th=[ 5080], 00:08:53.441 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 7767], 99.95th=[10552], 00:08:53.441 | 99.99th=[10945] 00:08:53.441 bw ( KiB/s): min=84088, max=91224, per=97.60%, avg=86653.33, stdev=3968.20, samples=3 00:08:53.441 iops : min=21022, max=22806, avg=21663.33, stdev=992.05, samples=3 00:08:53.441 lat (usec) : 1000=0.01% 00:08:53.441 lat (msec) : 2=1.07%, 4=89.76%, 10=9.12%, 20=0.04% 00:08:53.441 cpu : usr=99.20%, sys=0.15%, ctx=3, majf=0, minf=607 00:08:53.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:53.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:53.441 issued rwts: total=44716,44412,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:53.441 00:08:53.441 Run status group 0 (all jobs): 00:08:53.441 READ: bw=87.3MiB/s (91.5MB/s), 87.3MiB/s-87.3MiB/s (91.5MB/s-91.5MB/s), io=175MiB (183MB), run=2001-2001msec 00:08:53.441 WRITE: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=173MiB (182MB), run=2001-2001msec 00:08:53.441 ----------------------------------------------------- 00:08:53.441 Suppressions used: 00:08:53.441 count bytes template 00:08:53.441 1 32 /usr/src/fio/parse.c 00:08:53.441 1 8 libtcmalloc_minimal.so 00:08:53.441 ----------------------------------------------------- 00:08:53.441 00:08:53.441 20:36:10 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:08:53.441 20:36:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:53.441 20:36:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:53.441 20:36:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:53.441 20:36:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:53.441 20:36:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:53.441 20:36:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:53.441 20:36:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:53.441 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:53.441 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:53.441 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:53.441 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:53.441 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:53.441 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:53.441 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:53.441 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:53.441 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:53.441 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:53.441 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:53.699 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:53.699 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:53.699 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:53.700 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:53.700 20:36:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:53.700 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:53.700 fio-3.35 00:08:53.700 Starting 1 thread 00:09:00.343 00:09:00.343 test: (groupid=0, jobs=1): err= 0: pid=64266: Fri Dec 6 20:36:16 2024 00:09:00.343 read: IOPS=20.6k, BW=80.3MiB/s (84.2MB/s)(161MiB/2001msec) 00:09:00.343 slat (nsec): min=3394, max=60109, avg=5252.85, stdev=2533.10 00:09:00.343 clat (usec): min=220, max=111263, avg=3016.19, stdev=3043.02 00:09:00.343 lat (usec): min=225, max=111277, avg=3021.44, stdev=3043.61 00:09:00.343 clat percentiles (msec): 00:09:00.343 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:09:00.343 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 
00:09:00.343 | 70.00th=[ 3], 80.00th=[ 3], 90.00th=[ 5], 95.00th=[ 6], 00:09:00.343 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 10], 99.95th=[ 109], 00:09:00.343 | 99.99th=[ 109] 00:09:00.343 bw ( KiB/s): min=73632, max=89688, per=100.00%, avg=82834.67, stdev=8281.81, samples=3 00:09:00.343 iops : min=18408, max=22422, avg=20708.67, stdev=2070.45, samples=3 00:09:00.343 write: IOPS=20.5k, BW=80.1MiB/s (83.9MB/s)(160MiB/2001msec); 0 zone resets 00:09:00.343 slat (nsec): min=3549, max=73397, avg=5613.80, stdev=2608.70 00:09:00.343 clat (usec): min=230, max=115852, avg=3199.56, stdev=5418.31 00:09:00.343 lat (usec): min=235, max=115861, avg=3205.17, stdev=5418.81 00:09:00.343 clat percentiles (msec): 00:09:00.343 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:09:00.343 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:09:00.343 | 70.00th=[ 3], 80.00th=[ 3], 90.00th=[ 5], 95.00th=[ 6], 00:09:00.343 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 115], 99.95th=[ 116], 00:09:00.343 | 99.99th=[ 116] 00:09:00.343 bw ( KiB/s): min=74576, max=89160, per=100.00%, avg=82866.67, stdev=7494.35, samples=3 00:09:00.343 iops : min=18644, max=22290, avg=20716.67, stdev=1873.59, samples=3 00:09:00.343 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:09:00.343 lat (msec) : 2=0.94%, 4=88.46%, 10=10.38%, 250=0.16% 00:09:00.343 cpu : usr=99.20%, sys=0.00%, ctx=3, majf=0, minf=608 00:09:00.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:00.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.343 issued rwts: total=41122,41012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.343 00:09:00.343 Run status group 0 (all jobs): 00:09:00.343 READ: bw=80.3MiB/s (84.2MB/s), 80.3MiB/s-80.3MiB/s (84.2MB/s-84.2MB/s), io=161MiB (168MB), run=2001-2001msec 00:09:00.343 WRITE: bw=80.1MiB/s (83.9MB/s), 80.1MiB/s-80.1MiB/s (83.9MB/s-83.9MB/s), io=160MiB (168MB), run=2001-2001msec 00:09:00.343 ----------------------------------------------------- 00:09:00.343 Suppressions used: 00:09:00.343 count bytes template 00:09:00.343 1 32 /usr/src/fio/parse.c 00:09:00.343 1 8 libtcmalloc_minimal.so 00:09:00.343 ----------------------------------------------------- 00:09:00.343 00:09:00.343 20:36:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:00.343 20:36:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:00.343 20:36:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:00.343 20:36:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:00.343 20:36:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:00.343 20:36:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:00.343 20:36:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:00.343 20:36:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' 
--bs=4096 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:00.343 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:00.344 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:00.344 20:36:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:00.602 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:00.602 fio-3.35 00:09:00.602 Starting 1 thread 00:09:10.577 00:09:10.577 test: (groupid=0, jobs=1): err= 0: pid=64332: Fri Dec 6 20:36:26 2024 00:09:10.577 read: IOPS=21.3k, BW=83.2MiB/s (87.2MB/s)(166MiB/2001msec) 00:09:10.577 slat (nsec): min=3352, max=71379, avg=5224.22, stdev=2506.82 00:09:10.577 clat (usec): min=275, max=8578, avg=3001.49, stdev=982.27 00:09:10.577 lat (usec): min=281, max=8584, avg=3006.72, stdev=983.62 00:09:10.577 clat percentiles (usec): 00:09:10.577 | 1.00th=[ 2180], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2474], 00:09:10.577 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2704], 00:09:10.577 | 70.00th=[ 2835], 80.00th=[ 3130], 90.00th=[ 4424], 95.00th=[ 5538], 00:09:10.577 | 99.00th=[ 6652], 99.50th=[ 7177], 99.90th=[ 7898], 99.95th=[ 8094], 00:09:10.577 | 99.99th=[ 8455] 00:09:10.577 bw ( KiB/s): min=74232, max=92216, per=100.00%, avg=85994.67, stdev=10192.44, samples=3 00:09:10.577 iops : min=18558, max=23054, avg=21498.67, stdev=2548.11, samples=3 00:09:10.577 write: IOPS=21.1k, BW=82.6MiB/s (86.6MB/s)(165MiB/2001msec); 0 zone resets 00:09:10.577 slat (usec): min=3, max=120, avg= 5.56, stdev= 2.65 00:09:10.577 clat (usec): min=251, max=8855, avg=3006.13, stdev=983.01 00:09:10.577 lat (usec): min=257, max=8893, avg=3011.68, stdev=984.35 00:09:10.577 clat percentiles (usec): 00:09:10.577 | 1.00th=[ 2180], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2474], 00:09:10.577 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2704], 00:09:10.577 | 70.00th=[ 2868], 80.00th=[ 3130], 90.00th=[ 4424], 95.00th=[ 5538], 00:09:10.577 | 99.00th=[ 6718], 99.50th=[ 7242], 99.90th=[ 7898], 99.95th=[ 8225], 00:09:10.577 | 99.99th=[ 8356] 
00:09:10.577 bw ( KiB/s): min=76080, max=91256, per=100.00%, avg=86157.33, stdev=8727.43, samples=3 00:09:10.577 iops : min=19020, max=22814, avg=21539.33, stdev=2181.86, samples=3 00:09:10.577 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:09:10.577 lat (msec) : 2=0.25%, 4=87.48%, 10=12.22% 00:09:10.577 cpu : usr=99.10%, sys=0.05%, ctx=7, majf=0, minf=606 00:09:10.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.577 issued rwts: total=42609,42320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.577 00:09:10.577 Run status group 0 (all jobs): 00:09:10.577 READ: bw=83.2MiB/s (87.2MB/s), 83.2MiB/s-83.2MiB/s (87.2MB/s-87.2MB/s), io=166MiB (175MB), run=2001-2001msec 00:09:10.578 WRITE: bw=82.6MiB/s (86.6MB/s), 82.6MiB/s-82.6MiB/s (86.6MB/s-86.6MB/s), io=165MiB (173MB), run=2001-2001msec 00:09:10.578 ----------------------------------------------------- 00:09:10.578 Suppressions used: 00:09:10.578 count bytes template 00:09:10.578 1 32 /usr/src/fio/parse.c 00:09:10.578 1 8 libtcmalloc_minimal.so 00:09:10.578 ----------------------------------------------------- 00:09:10.578 00:09:10.578 20:36:26 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:10.578 20:36:26 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:10.578 ************************************ 00:09:10.578 END TEST nvme_fio 00:09:10.578 ************************************ 00:09:10.578 00:09:10.578 real 0m30.487s 00:09:10.578 user 0m20.677s 00:09:10.578 sys 0m16.537s 00:09:10.578 20:36:26 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.578 20:36:26 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:10.578 00:09:10.578 real 1m40.123s 00:09:10.578 user 3m41.963s 00:09:10.578 sys 0m27.613s 00:09:10.578 20:36:26 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.578 ************************************ 00:09:10.578 END TEST nvme 00:09:10.578 ************************************ 00:09:10.578 20:36:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.578 20:36:26 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:10.578 20:36:26 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:10.578 20:36:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.578 20:36:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.578 20:36:26 -- common/autotest_common.sh@10 -- # set +x 00:09:10.578 ************************************ 00:09:10.578 START TEST nvme_scc 00:09:10.578 ************************************ 00:09:10.578 20:36:26 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:10.578 * Looking for test storage... 
00:09:10.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:10.578 20:36:26 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:10.578 20:36:26 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:10.578 20:36:26 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:10.578 20:36:26 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:10.578 20:36:26 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.578 20:36:26 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:10.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.578 --rc genhtml_branch_coverage=1 00:09:10.578 --rc genhtml_function_coverage=1 00:09:10.578 --rc genhtml_legend=1 00:09:10.578 --rc geninfo_all_blocks=1 00:09:10.578 --rc geninfo_unexecuted_blocks=1 00:09:10.578 00:09:10.578 ' 00:09:10.578 20:36:26 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:10.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.578 --rc genhtml_branch_coverage=1 00:09:10.578 --rc genhtml_function_coverage=1 00:09:10.578 --rc genhtml_legend=1 00:09:10.578 --rc geninfo_all_blocks=1 00:09:10.578 --rc geninfo_unexecuted_blocks=1 00:09:10.578 00:09:10.578 ' 00:09:10.578 20:36:26 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:10.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.578 --rc genhtml_branch_coverage=1 00:09:10.578 --rc genhtml_function_coverage=1 00:09:10.578 --rc genhtml_legend=1 00:09:10.578 --rc geninfo_all_blocks=1 00:09:10.578 --rc geninfo_unexecuted_blocks=1 00:09:10.578 00:09:10.578 ' 00:09:10.578 20:36:26 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:10.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.578 --rc genhtml_branch_coverage=1 00:09:10.578 --rc genhtml_function_coverage=1 00:09:10.578 --rc genhtml_legend=1 00:09:10.578 --rc geninfo_all_blocks=1 00:09:10.578 --rc geninfo_unexecuted_blocks=1 00:09:10.578 00:09:10.578 ' 00:09:10.578 20:36:26 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.578 20:36:26 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.578 20:36:26 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.578 20:36:26 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.578 20:36:26 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.578 20:36:26 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:10.578 20:36:26 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
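The version gymnastics above are scripts/common.sh deciding whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' and compared field by field, with missing fields treated as zero. A simplified sketch of that comparison (same idea as the traced cmp_versions, not the verbatim source, and assuming purely numeric fields):

# Return success when version $1 is strictly older than version $2.
ver_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i a b max
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal versions are not "less than"
}

ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"

In the run above the comparison of 1.15 against 2 succeeds on the first field, which is why the pre-2.x LCOV_OPTS block gets exported.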
00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:10.578 20:36:26 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:10.578 20:36:26 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.578 20:36:26 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:10.578 20:36:26 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:10.578 20:36:26 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:10.578 20:36:26 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:10.578 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:10.578 Waiting for block devices as requested 00:09:10.578 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.578 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.578 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.578 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.851 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:15.851 20:36:32 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:15.851 20:36:32 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:15.851 20:36:32 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:15.851 20:36:32 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:15.851 20:36:32 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
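From here to the end of the section the log is scan_nvme_ctrls at work: for each controller under /sys/class/nvme, nvme_get runs nvme-cli's id-ctrl (and, per namespace, id-ns), reads each "field : value" line with IFS=: and evals it into a named associative array; nvme0[vid]=0x1b36 above is the first field captured, and every entry that follows repeats the same pattern. A stripped-down sketch of that parse, filling a plain array instead of an eval'd named one (assuming nvme-cli's human-readable "field : value" output):

# Populate an associative array from `nvme id-ctrl` output.
declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}   # field names are single tokens, e.g. "vid"
    val=${val# }               # drop the space nvme-cli pads after ':'
    [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)
echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"

Because `read -r reg val` only splits at the first colon, values that themselves contain colons (the power-state and lbaf lines later in the dump) land intact in val.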
00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.851 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
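Of the controller fields captured so far, mdts=7 is the one with direct I/O-sizing consequences: per the NVMe spec, MDTS is a power-of-two multiplier of the controller's minimum memory page size (CAP.MPSMIN). Assuming the usual 4 KiB minimum page on this QEMU controller (an assumption; MPSMIN is not printed in this log), the limit works out as:

# Largest transfer implied by mdts=7, assuming a 4 KiB CAP.MPSMIN page.
mdts=7
page=4096                          # assumed minimum memory page size
echo $(( (1 << mdts) * page ))     # 524288 bytes = 512 KiB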
00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:15.852 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:15.853 20:36:32 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.853 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:15.854 20:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:15.854 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:15.855 
20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:15.855 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:15.856 20:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:15.856 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:15.856 20:36:32 nvme_scc 
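[Note] The block above is the core of nvme_get (nvme/functions.sh@16-23): nvme-cli's id-ns output is read line by line, split on the first ':' into a register name and value, and eval'd into a global associative array named after the device node. A minimal sketch of that pattern, with the dynamic array name from the trace hard-coded for clarity:

    # Sketch of the nvme_get read loop traced above (simplified: the real
    # function creates the array dynamically via local -gA "$ref=()").
    declare -gA nvme0n1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # strip the padding around the key
        # skip header lines with no "reg : val" pair, as in [[ -n '' ]] above
        [[ -n $val ]] && eval "nvme0n1[$reg]=\"${val# }\""   # e.g. nvme0n1[nsze]="0x140000"
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1)

Because read splits only on the first colon, multi-colon values such as the lbafN strings ("ms:0 lbads:9 rp:0") survive intact in val.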
-- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:15.857 20:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.857 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:15.858 20:36:32 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:15.858 20:36:32 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:15.858 20:36:32 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:15.858 20:36:32 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:15.858 20:36:32 
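[Note] At functions.sh@58-63 the parsed namespace is linked back into the global bookkeeping: _ctrl_ns maps the namespace index to its device name, ctrls/nvmes/bdfs record the controller, the name of its namespace array, and its PCI address (0000:00:11.0), and ordered_ctrls keeps controllers sorted by index. Enumeration then moves to nvme1, whose address 0000:00:10.0 first passes pci_can_use in scripts/common.sh, apparently an allow/block-list check (both lists are empty in this run, so the device is accepted). The lbaf table parsed just above is also worth decoding; a hedged sketch, assuming the array built by nvme_get:

    # Decode the in-use LBA format for nvme0n1 from the values traced above.
    # FLBAS bits 3:0 index the lbafN entries; lbads is log2 of the block size.
    flbas=${nvme0n1[flbas]}                 # 0x4 -> lbaf4 is marked "(in use)"
    lbaf=${nvme0n1[lbaf$((flbas & 0xf))]}   # "ms:0 lbads:12 rp:0 (in use)"
    lbads=$(sed -E 's/.*lbads:([0-9]+).*/\1/' <<<"$lbaf")
    echo "block: $((1 << lbads)) B"                    # 2^12 = 4096 bytes, no metadata
    echo "size:  $(( ${nvme0n1[nsze]} << lbads )) B"   # 0x140000 * 4096 = 5 GiB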
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.858 
20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.858 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:15.859 
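[Note] Two of the id-ctrl values above merit expansion: ver=0x10400 packs the NVMe spec version as major/minor/tertiary bytes, and mdts=7 caps a single transfer at 2^MDTS memory pages of CAP.MPSMIN size. A small sketch; the 4 KiB minimum page size is an assumption, not taken from this log:

    ver=0x10400 mdts=7 mpsmin_bytes=4096   # page size assumed (typical for QEMU)
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(( (ver >> 8) & 0xff )) $((ver & 0xff))   # NVMe 1.4.0
    echo "max transfer: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"                   # 512 KiB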
20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.859 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- 
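[Note] The thermal thresholds parsed above are in kelvin, as id-ctrl reports them: wctemp=343 and cctemp=373. Converted:

    wctemp=343 cctemp=373
    echo "warn at $((wctemp - 273))C, critical at $((cctemp - 273))C"   # 70C / 100C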
nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
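[Note] sqes=0x66 and cqes=0x44 above are packed log2 sizes: the low nibble is the required (minimum) queue-entry size, the high nibble the maximum. Decoded:

    sqes=0x66 cqes=0x44
    echo "SQE: $((1 << (sqes & 0xf)))-$((1 << (sqes >> 4))) bytes"   # 64-64
    echo "CQE: $((1 << (cqes & 0xf)))-$((1 << (cqes >> 4))) bytes"   # 16-16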
00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.860 20:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:15.860 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:15.861 20:36:32 
00:09:15.861 20:36:32 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns parse for ng1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:15.862 20:36:32 nvme_scc -- nvme/functions.sh@21-23 -- # lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:15.863 20:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
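[Editor's note] The @54 loop glob @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* is worth unpacking: it needs extglob and, for ctrl=/sys/class/nvme/nvme1, expands to @(ng1|nvme1n)*, which is how both the ng1n1 character device and the nvme1n1 block device get visited; ${ns##*n} then strips through the last 'n' to recover the namespace index used as the _ctrl_ns key. A sketch of just those expansions on plain strings:

#!/usr/bin/env bash
shopt -s extglob                    # required for the @(...) pattern
ctrl=/sys/class/nvme/nvme1
# "${ctrl##*nvme}" -> "1"; "${ctrl##*/}" -> "nvme1"
echo "glob: @(ng${ctrl##*nvme}|${ctrl##*/}n)*"    # -> @(ng1|nvme1n)*
for ns in ng1n1 nvme1n1; do         # what the glob matches under $ctrl/
  echo "ns=$ns -> _ctrl_ns index ${ns##*n}"       # ${ns##*n} -> 1 for both
done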
00:09:15.863 20:36:32 nvme_scc -- nvme/functions.sh@54-57 -- # loop continues: /sys/class/nvme/nvme1/nvme1n1 exists -> ns_dev=nvme1n1; nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:15.863 20:36:32 nvme_scc -- nvme/functions.sh@16-20 -- # shift; local -gA 'nvme1n1=()'; /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:15.863 20:36:32 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns parse for nvme1n1, values identical to ng1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:15.864 20:36:32 nvme_scc -- nvme/functions.sh@21-23 -- # lbaf0-lbaf7 as for ng1n1, with lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:15.864 20:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
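[Editor's note] A quick decode of the geometry both views report, using numbers taken straight from the trace: flbas bits 3:0 select the formatted LBA format, here index 7, whose lbads:12 means 4096-byte data blocks (the 64 metadata bytes per block are not counted in the data-size arithmetic below).

#!/usr/bin/env bash
# Namespace geometry from the trace: 0x17a17a blocks of 2^12 bytes.
nsze=0x17a17a                       # namespace size in logical blocks
flbas=0x7                           # bits 3:0 -> LBA format index 7
lbads=12                            # lbaf7: "ms:64 lbads:12 rp:0 (in use)"
fmt=$(( flbas & 0xf ))              # -> 7
bytes=$(( nsze * (1 << lbads) ))    # 1548666 * 4096 = 6343335936
printf 'lbaf%d: %d blocks x %d B = %d bytes (~%d GiB)\n' \
  "$fmt" $(( nsze )) $(( 1 << lbads )) "$bytes" $(( bytes >> 30 ))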
00:09:15.864 20:36:32 nvme_scc -- nvme/functions.sh@60-63 -- # register controller: ctrls["$ctrl_dev"]=nvme1; nvmes["$ctrl_dev"]=nvme1_ns; bdfs["$ctrl_dev"]=0000:00:10.0; ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:15.864 20:36:32 nvme_scc -- nvme/functions.sh@47-49 -- # next controller: /sys/class/nvme/nvme2 exists, pci=0000:00:12.0
00:09:15.864 20:36:32 nvme_scc -- scripts/common.sh@18-27 -- # pci_can_use 0000:00:12.0: the allow-list expands to nothing ([[ =~ 0000:00:12.0 ]], [[ -z '' ]]), so the device is allowed -> return 0
00:09:15.864 20:36:32 nvme_scc -- nvme/functions.sh@51-52,16-20 -- # ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2; shift; local -gA 'nvme2=()'; /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:15.864 20:36:32 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl parse for nvme2: vid=0x1b36 ssvid=0x1af4 sn='12342               ' mn='QEMU NVMe Ctrl                          '
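[Editor's note] The four @60-63 assignments are the whole discovery contract: per controller device, ctrls holds the id-ctrl array name, nvmes the namespace-map name, bdfs the PCI address, and ordered_ctrls a numerically indexed view keyed by ${ctrl_dev/nvme/}. A sketch of how a consumer might walk them afterwards -- illustrative usage under those assumptions, not code from functions.sh:

#!/usr/bin/env bash
# Illustrative consumer of the discovery maps built at functions.sh@60-63.
declare -A ctrls=([nvme1]=nvme1 [nvme2]=nvme2)          # dev -> id-ctrl array
declare -A bdfs=([nvme1]=0000:00:10.0 [nvme2]=0000:00:12.0)
declare -a ordered_ctrls=()
for dev in "${!ctrls[@]}"; do
  ordered_ctrls[${dev/nvme/}]=$dev                      # "nvme2" -> index 2
done
for idx in "${!ordered_ctrls[@]}"; do                   # stable numeric order
  dev=${ordered_ctrls[idx]}
  echo "$dev at PCI ${bdfs[$dev]}"
done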
00:09:15.865 20:36:32 nvme_scc -- nvme/functions.sh@21-23 -- # fr='8.0.0    ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21-23 -- # sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0
# IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.867 
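(The block above is the tail of the id-ctrl pass for controller nvme2: nvme/functions.sh runs nvme-cli, splits each output line on the first ':' via IFS=:, and evals the field/value pair into a global associative array, so that e.g. ${nvme2[oncs]} later reads back 0x15d. The extglob in the for loop that follows matches both the generic character nodes, ng2nN, and the block nodes, nvme2nN, under /sys/class/nvme/nvme2. Below is a minimal sketch of that parsing pattern as visible in the trace; the whitespace handling, device path, and usage lines are illustrative assumptions, not copied from functions.sh, which also special-cases multi-word fields.)

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern traced above: parse "field : value"
    # lines from nvme-cli into a global associative array named by $1.
    nvme_get() {
      local ref=$1 cmd=$2 dev=$3 reg val
      local -gA "$ref=()"               # declare the array globally, as functions.sh@20 does
      while IFS=: read -r reg val; do   # split each line on the first ':' (functions.sh@21)
        reg=${reg//[[:space:]]/}        # strip padding around the field name (simplification)
        val=${val# }                    # drop the leading space after ':'
        [[ -n $val ]] || continue       # skip valueless lines (functions.sh@22)
        eval "${ref}[\$reg]=\$val"      # store the pair, as functions.sh@23 does
      done < <(nvme "$cmd" "$dev")
    }
    # Hypothetical usage mirroring the trace:
    #   nvme_get nvme2 id-ctrl /dev/nvme2
    #   echo "${nvme2[oncs]}"           # -> 0x15d on this controller

(Capturing every field up front this way lets later test stages branch on controller capabilities without re-running nvme-cli.)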
20:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:15.867 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.868 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:15.869 20:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 
20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.869 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.870 20:36:32 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.870 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.871 20:36:32 
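(At functions.sh@58, visible above after each id-ns pass, the parsed namespace is registered in the controller's map: _ctrl_ns is a nameref to nvme2_ns, set up at functions.sh@53, and ${ns##*n} strips everything through the last 'n' in the sysfs path, leaving just the namespace index. A small standalone sketch of that bookkeeping; declare -n stands in here for the local -n the function uses:)

    # Sketch of the per-controller namespace map built in this trace.
    declare -gA nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns         # nameref, as functions.sh@53 does with local -n
    ns=/sys/class/nvme/nvme2/ng2n2
    _ctrl_ns[${ns##*n}]=ng2n2            # ${ns##*n} strips through the last 'n' -> "2"
    echo "${!nvme2_ns[@]}"               # -> 2 (namespace index -> array name ng2n2)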
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:15.871 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.872 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.873 20:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:15.873 20:36:32 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:15.873 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:15.874 20:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.874 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.875 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:16.136 
20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:16.136 20:36:32 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:16.137 20:36:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:16.137 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:16.138 20:36:33 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:16.138 20:36:33 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:16.138 20:36:33 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:16.138 20:36:33 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:16.138 20:36:33 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:16.138 20:36:33 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:16.138 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:16.139 20:36:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 
20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:16.139 20:36:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 
20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:16.140 
20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 20:36:33 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:16.141 20:36:33 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:16.141 20:36:33 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
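A condensed sketch of the nvme_get parse loop traced above (reconstructed from the xtrace at functions.sh@16-23; the real functions.sh may differ in trimming and quoting details): it runs nvme id-ctrl, splits each output line on the first colon, and evals the register/value pair into a global associative array named after the controller.

    # Sketch of the id-ctrl reader seen in the trace
    # (names taken from the trace; whitespace handling is assumed).
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # e.g. declare -gA 'nvme3=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # skip headers and blank lines
            eval "${ref}[${reg// /}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    nvme_get nvme3 id-ctrl /dev/nvme3          # as invoked at functions.sh@52

One scan pass like this leaves an array per controller (nvme0..nvme3) plus the ctrls/nvmes/bdfs maps keyed on the controller name, which is what the feature probes below read back.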
00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:16.141 20:36:33 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:16.142 20:36:33 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:16.142 20:36:33 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:16.142 20:36:33 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:16.142 20:36:33 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:16.142 20:36:33 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:16.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:16.966 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:16.966 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:16.966 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:16.966 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:16.966 20:36:34 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:16.966 20:36:34 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:16.966 20:36:34 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.966 20:36:34 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:16.966 ************************************ 00:09:16.966 START TEST nvme_simple_copy 00:09:16.966 ************************************ 00:09:16.966 20:36:34 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:17.224 Initializing NVMe Controllers 00:09:17.224 Attaching to 0000:00:10.0 00:09:17.224 Controller supports SCC. Attached to 0000:00:10.0 00:09:17.224 Namespace ID: 1 size: 6GB 00:09:17.224 Initialization complete. 
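The controller driven by simple_copy was picked by the ONCS probe traced at functions.sh@184-199 above: bit 8 of the ONCS field advertises the NVMe Copy command (the "SCC" feature), every QEMU controller here reports oncs=0x15d with that bit set, and the first hit, nvme1 at 0000:00:10.0, wins. A minimal sketch of that probe, assuming the ctrls map and per-controller arrays left by the scan:

    # Sketch of ctrl_has_scc as traced above.
    ctrl_has_scc() {
        local -n _ctrl=$1            # nameref to e.g. the nvme1 array
        local oncs=${_ctrl[oncs]}    # 0x15d on these controllers
        (( oncs & 1 << 8 ))          # ONCS bit 8 = Copy (simple copy) support
    }
    for ctrl in "${!ctrls[@]}"; do
        ctrl_has_scc "$ctrl" && echo "$ctrl"
    done                             # get_ctrl_with_feature takes the first name returned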
00:09:17.224 00:09:17.224 Controller QEMU NVMe Ctrl (12340 ) 00:09:17.224 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:17.224 Namespace Block Size:4096 00:09:17.224 Writing LBAs 0 to 63 with Random Data 00:09:17.224 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:17.224 LBAs matching Written Data: 64 00:09:17.224 ************************************ 00:09:17.224 END TEST nvme_simple_copy 00:09:17.224 ************************************ 00:09:17.224 00:09:17.224 real 0m0.253s 00:09:17.224 user 0m0.089s 00:09:17.224 sys 0m0.063s 00:09:17.224 20:36:34 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.224 20:36:34 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:17.224 ************************************ 00:09:17.224 END TEST nvme_scc 00:09:17.224 ************************************ 00:09:17.224 00:09:17.224 real 0m7.576s 00:09:17.224 user 0m1.035s 00:09:17.224 sys 0m1.376s 00:09:17.224 20:36:34 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.224 20:36:34 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:17.483 20:36:34 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:17.484 20:36:34 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:17.484 20:36:34 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:17.484 20:36:34 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:17.484 20:36:34 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:17.484 20:36:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.484 20:36:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.484 20:36:34 -- common/autotest_common.sh@10 -- # set +x 00:09:17.484 ************************************ 00:09:17.484 START TEST nvme_fdp 00:09:17.484 ************************************ 00:09:17.484 20:36:34 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:17.484 * Looking for test storage... 00:09:17.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:17.484 20:36:34 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:17.484 20:36:34 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:17.484 20:36:34 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:17.484 20:36:34 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:17.484 20:36:34 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.484 20:36:34 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:17.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.484 --rc genhtml_branch_coverage=1 00:09:17.484 --rc genhtml_function_coverage=1 00:09:17.484 --rc genhtml_legend=1 00:09:17.484 --rc geninfo_all_blocks=1 00:09:17.484 --rc geninfo_unexecuted_blocks=1 00:09:17.484 00:09:17.484 ' 00:09:17.484 20:36:34 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:17.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.484 --rc genhtml_branch_coverage=1 00:09:17.484 --rc genhtml_function_coverage=1 00:09:17.484 --rc genhtml_legend=1 00:09:17.484 --rc geninfo_all_blocks=1 00:09:17.484 --rc geninfo_unexecuted_blocks=1 00:09:17.484 00:09:17.484 ' 00:09:17.484 20:36:34 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:17.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.484 --rc genhtml_branch_coverage=1 00:09:17.484 --rc genhtml_function_coverage=1 00:09:17.484 --rc genhtml_legend=1 00:09:17.484 --rc geninfo_all_blocks=1 00:09:17.484 --rc geninfo_unexecuted_blocks=1 00:09:17.484 00:09:17.484 ' 00:09:17.484 20:36:34 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:17.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.484 --rc genhtml_branch_coverage=1 00:09:17.484 --rc genhtml_function_coverage=1 00:09:17.484 --rc genhtml_legend=1 00:09:17.484 --rc geninfo_all_blocks=1 00:09:17.484 --rc geninfo_unexecuted_blocks=1 00:09:17.484 00:09:17.484 ' 00:09:17.484 20:36:34 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.484 20:36:34 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.484 20:36:34 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.484 20:36:34 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.484 20:36:34 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.484 20:36:34 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:17.484 20:36:34 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:17.484 20:36:34 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:17.484 20:36:34 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:17.484 20:36:34 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:17.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.001 Waiting for block devices as requested 00:09:18.001 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.001 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.001 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.259 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.676 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:23.676 20:36:40 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:23.676 20:36:40 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:23.676 20:36:40 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:23.676 20:36:40 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:23.676 20:36:40 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:23.676 20:36:40 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:23.676 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:23.677 20:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:23.677 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:23.678 20:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.678 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:23.679 20:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.679 
20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:23.679 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:23.680 20:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:23.680 20:36:40 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:23.680 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:23.681 20:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.681 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:23.682 20:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
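The trace above shows nvme/functions.sh building the ng0n1 associative array one field at a time: nvme_get (functions.sh@16-23) runs nvme-cli's id-ns against /dev/ng0n1, then reads each "field : value" line with IFS=: and evals it into a global array keyed by the field name. A minimal sketch of that loop, reconstructed from the trace alone (nvme_get_sketch, its argument handling, and the trimming details are illustrative assumptions, not the actual SPDK helper):

nvme_get_sketch() {
    local ref=$1 reg val              # ref is the array name, e.g. ng0n1
    shift
    local -gA "$ref=()"               # declare a global associative array, as at functions.sh@20
    while IFS=: read -r reg val; do   # nvme-cli prints "field : value" lines
        [[ -n $val ]] || continue     # header/blank lines carry no value (cf. the [[ -n '' ]] checks)
        reg=${reg%% *}                # trim the padding around the field name
        val=${val# }
        eval "$ref[$reg]=\"\$val\""   # e.g. ng0n1[nsze]="0x140000"
    done < <("$@")                    # the remaining args are the command to run
}

# Hypothetical invocation matching the trace:
# nvme_get_sketch ng0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1

The extglob pattern visible at functions.sh@54, "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, is what pairs each controller with both its character node (ng0n1) and its block namespace (nvme0n1), which is why the same id-ns parse repeats for nvme0n1 immediately below.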
00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.682 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:23.683 20:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:23.683 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.684 20:36:40 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:23.684 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:23.685 20:36:40 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:23.685 20:36:40 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:23.685 20:36:40 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:23.685 20:36:40 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:23.685 20:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:23.685 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
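[editor's note] The trace above repeats one pattern from nvme_get (nvme/functions.sh@17-23): each "reg : val" line printed by nvme-cli is split on the colon with IFS=: read -r reg val, and every non-empty value is stored in a global associative array named after the device. A minimal sketch of that loop, reconstructed from the trace (the whitespace trimming and the way the nvme command is passed in are assumptions, not the exact upstream code):

nvme_get() {
	local ref=$1 reg val
	shift
	local -gA "$ref=()"                      # e.g. declare -gA nvme1=()
	while IFS=: read -r reg val; do
		[[ -n $val ]] || continue            # keep only "reg : val" lines
		reg=${reg%"${reg##*[![:space:]]}"}   # trim trailing blanks (assumed)
		val=${val#"${val%%[![:space:]]*}"}   # trim leading blanks (assumed)
		eval "${ref}[\$reg]=\$val"           # nvme1[vid]=0x1b36, ...
	done < <("$@")
}

Called as nvme_get nvme1 nvme id-ctrl /dev/nvme1 this produces exactly the assignments the trace shows (nvme1[vid], nvme1[sn], ...); in the log the helper invokes /usr/local/src/nvme-cli/nvme itself at functions.sh@16 rather than taking the command as arguments.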
00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.686 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
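[editor's note] Earlier in the trace (scripts/common.sh@18-27), pci_can_use 0000:00:10.0 returned 0 after two empty-list checks. A sketch of that gate, assuming SPDK's usual PCI_ALLOWED/PCI_BLOCKED environment variables and approximating the exact matching rules:

pci_can_use() {
	local i
	# space-padded so one BDF cannot match inside another
	if [[ " ${PCI_BLOCKED:-} " =~ " $1 " ]]; then
		return 1                 # explicitly blocked
	fi
	if [[ -z ${PCI_ALLOWED:-} ]]; then
		return 0                 # no allow list: every device is usable
	fi
	for i in $PCI_ALLOWED; do
		[[ $i == "$1" ]] && return 0
	done
	return 1
}

With both lists unset, as in this run, the function falls through the block-list test at @21 and returns 0 from the empty-allow-list branch at @25-27, which is why both 0000:00:11.0 and 0000:00:10.0 were accepted.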
00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.687 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:23.688 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:23.689 20:36:40 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
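[editor's note] The loop the trace entered at functions.sh@53-58 walks every namespace node the controller exposes. A reconstruction of those lines for nvme1, with extglob/nullglob made explicit so the snippet runs standalone:

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme1
declare -gA nvme1_ns=()
declare -n _ctrl_ns=nvme1_ns                 # nameref, as at functions.sh@53
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
	ns_dev=${ns##*/}                         # first ng1n1, then nvme1n1
	_ctrl_ns[${ns##*n}]=$ns_dev              # keyed by namespace number: "1"
done

The resulting @(ng1|nvme1n)* pattern matches both the ng1n1 character device and the nvme1n1 block device, so nvme_get runs once for each, as the entries below show; the second pass stores nvme1n1 under the same key "1".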
00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:23.689 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:23.690 20:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
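[editor's note] Once parsed, the per-namespace values the trace just stored can be read back directly from the array, e.g. for ng1n1 (the values shown are the ones parsed above):

echo "nsze:  ${ng1n1[nsze]}"     # 0x17a17a
echo "flbas: ${ng1n1[flbas]}"    # 0x7 -> LBA format 7 is the one in use
echo "mssrl: ${ng1n1[mssrl]}"    # 128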
00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.690 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.691 20:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.691 20:36:40 nvme_fdp -- 
00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:09:23.691 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:09:23.692 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:09:23.692 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:09:23.692 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:09:23.692 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: dps=0 nmic=0 rescap=0 fpi=0 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:23.692 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:09:23.692 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:09:23.692 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:23.692 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:23.692 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:23.692 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:23.693 20:36:40 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:23.693 20:36:40 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:23.693 20:36:40 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
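The bookkeeping lines above (functions.sh@47-63) are the outer enumeration loop: walk each controller in sysfs, let pci_can_use (scripts/common.sh) filter it against the PCI allow/block lists, then record the name-to-BDF mapping before descending into its namespaces. A sketch of that step under one stated assumption: reading the BDF from the controller's sysfs 'address' attribute is illustrative, not taken from this trace.

  declare -A ctrls=() nvmes=() bdfs=()
  declare -a ordered_ctrls=()
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(<"$ctrl/address")            # e.g. 0000:00:12.0
      pci_can_use "$pci" || continue     # skip blocked / non-allowed devices
      ctrl_dev=${ctrl##*/}               # nvme2
      ctrls[$ctrl_dev]=$ctrl_dev
      nvmes[$ctrl_dev]=${ctrl_dev}_ns    # name of the per-controller ns array
      bdfs[$ctrl_dev]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
  done

The extglob pattern in the inner loop, "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, is what lets the same pass pick up both the ng2n* character devices and the nvme2n* block devices seen in this log.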
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:09:23.693 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:09:23.694 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:09:23.694 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:09:23.694 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:09:23.694 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:09:23.694 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:23.694 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:23.694 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:23.694 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:23.694 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:23.694 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:23.694 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:23.694 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: cmic=0 cntlid=0 rtd3r=0 rtd3e=0 rrls=0 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 elpe=0 npss=0 avscc=0 apsta=0 mtfa=0 hmpre=0
00:09:23.695 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:09:23.696 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:09:23.696 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:09:23.696 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:09:23.696 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:09:23.696 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:09:23.696 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:09:23.696 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:09:23.696 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 maxcmd=0 fuses=0 fna=0 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 mnan=0 maxdna=0 maxcna=0 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
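Before the namespace dump continues, a quick decode of the nvme2 id-ctrl values captured above. SQES/CQES carry min/max queue-entry sizes as power-of-two exponents in their low and high nibbles, and MDTS is an exponent scaled by the controller's minimum page size; the 4 KiB minimum page size here is an assumption (it is not in this trace), so treat the last figure as illustrative.

  # decode_ctrl is a hypothetical helper, shown only to make the encoding concrete
  decode_ctrl() {
      local sqes=$1 cqes=$2 mdts=$3 mpsmin_bytes=4096   # 4 KiB MPSMIN assumed
      echo "SQE size: $(( 1 << (sqes & 0xf) )) bytes"   # 0x66 -> 64 bytes
      echo "CQE size: $(( 1 << (cqes & 0xf) )) bytes"   # 0x44 -> 16 bytes
      echo "Max transfer: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"  # 512 KiB
  }
  decode_ctrl 0x66 0x44 7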
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:09:23.697 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:09:23.698 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:09:23.698 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:09:23.698 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:09:23.698 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:09:23.698 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: dps=0 nmic=0 rescap=0 fpi=0 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
ng2n2[nsze]=0x100000 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:23.699 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:23.700 20:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 
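The pattern repeating through this stretch of the trace is nvme/functions.sh splitting each `reg : val` line printed by nvme-cli and eval'ing it into a per-namespace associative array (ng2n2 here). A minimal standalone sketch of that parsing loop, assuming a fixed array name ns_info and a plain `nvme` binary on PATH instead of the script's eval over a dynamic array name:

    #!/usr/bin/env bash
    # Read "field : value" lines from `nvme id-ns` into an associative array.
    shopt -s extglob
    declare -A ns_info
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}       # field name with whitespace stripped
        val=${val##+([[:space:]])}     # value with leading whitespace stripped
        [[ -n $reg && -n $val ]] && ns_info[$reg]=$val
    done < <(nvme id-ns /dev/ng2n2)
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

The `[[ -n ... ]]` guard is the same empty-value skip visible in the trace; the whitespace trimming is an addition for the sketch, not something the script itself does.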
20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.700 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.701 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:23.702 
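The `local -gA 'ng2n3=()'` step in the nvme_get prologue above is the bash idiom for declaring an associative array inside a function while giving it global scope, so the parsed identify data is still available after nvme_get returns. A small sketch of just that idiom, with illustrative names:

    #!/usr/bin/env bash
    fill() {
        # -g: global scope, even though the declaration lives inside a function
        # -A: associative array (string keys such as nsze, flbas, lbaf4, ...)
        local -gA ns=()
        ns[nsze]=0x100000
    }
    fill
    echo "${ns[nsze]}"    # still set here: the array outlived the function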
20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.702 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:23.703 20:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.703 20:36:40 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:23.703 20:36:40 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:23.703 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.704 
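The fields captured for nvme2n1 decode to a concrete geometry: flbas 0x4 selects LBA format 4 (the lbaf entry tagged "(in use)"), whose lbads of 12 gives 2^12 = 4096-byte blocks with no per-block metadata (ms:0), and nsze 0x100000 is 1,048,576 such blocks, i.e. a 4 GiB namespace. The arithmetic, checked with the values from the trace:

    # lbads is a power-of-two exponent: block size = 2^lbads bytes
    lbads=12 nsze=0x100000
    echo $(( 1 << lbads ))              # 4096 (bytes per logical block)
    echo $(( nsze * (1 << lbads) ))     # 4294967296 bytes = 4 GiB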
20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:23.704 20:36:40 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:23.704 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.705 
20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
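Each pass of this enumeration (the `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` lines in the trace) relies on extglob: with ctrl=/sys/class/nvme/nvme2, `${ctrl##*nvme}` expands to 2 and `${ctrl##*/}n` to nvme2n, so a single pattern matches both the character-device nodes (ng2n1, ng2n2, ...) and the block-device nodes (nvme2n1, ...) in the controller's sysfs directory, and `${ns##*n}` then yields the namespace index used to key _ctrl_ns. A standalone sketch of the same expansion, assuming the sysfs path exists:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # ${ns##*n} strips through the last 'n', leaving the namespace id
        echo "${ns##*/} -> namespace ${ns##*n}"
    done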
00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:23.705 20:36:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:23.706 20:36:40 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.706 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:23.707 20:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:23.707 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:23.708 20:36:40 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.708 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:23.709 20:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.709 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:23.710 20:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:23.710 20:36:40 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:23.710 20:36:40 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:23.710 20:36:40 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:23.710 20:36:40 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
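At this point in the trace, namespace nvme2n3 has finished parsing, controller nvme2 has been registered in the ctrls/nvmes/bdfs/ordered_ctrls tables, and the outer loop has moved on to /sys/class/nvme/nvme3: pci_can_use in scripts/common.sh accepted PCI address 0000:00:13.0 (its allow/block lists are empty in this run, hence the bare "[[ =~ 0000:00:13.0 ]]" test), and nvme_get is now filling the nvme3 array from id-ctrl. A sketch of the shape of that outer loop, reconstructed from the functions.sh line numbers in the trace; illustrative only, and note the @(...) glob needs extglob:

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  shopt -s extglob
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(basename "$(readlink -f "$ctrl/device")")  # assumption: the trace only shows the resulting BDF
      pci_can_use "$pci" || continue                   # allow/block-list check in scripts/common.sh
      ctrl_dev=${ctrl##*/}                             # e.g. nvme3
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
      declare -gA "${ctrl_dev}_ns=()"
      declare -n _ctrl_ns=${ctrl_dev}_ns               # assumption: _ctrl_ns aliases the per-controller table
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
          [[ -e $ns ]] || continue
          ns_dev=${ns##*/}                             # nvme3n1, nvme3n2, ...
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
          _ctrl_ns[${ns##*n}]=$ns_dev                  # keyed by namespace number
      done
      unset -n _ctrl_ns
      ctrls["$ctrl_dev"]=$ctrl_dev
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns
      bdfs["$ctrl_dev"]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
  done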
00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.710 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 
20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:23.711 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:23.712 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.713 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
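The xtrace running through here is nvme/functions.sh filling a per-controller associative array: for each "register : value" pair in the controller-identify output it runs read -r with IFS=: and eval's an assignment such as nvme3[sqes]=0x66. A condensed sketch of that loop, with the nameref plumbing omitted and the input assumed to be `nvme id-ctrl` style text (the "nvme3" name and the id-ctrl invocation are stand-ins for whatever controller is being parsed):

    # Sketch only: mirrors the IFS=: / read / eval pattern traced above.
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # register names arrive padded around the colon
        val=${val# }
        [[ -n $val ]] || continue       # same guard as the [[ -n ... ]] checks in the trace
        eval "nvme3[$reg]=\"$val\""     # e.g. nvme3[sqes]=0x66, nvme3[subnqn]=nqn....
    done < <(nvme id-ctrl /dev/nvme3)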
00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:23.714 20:36:40 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:23.714 20:36:40 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:23.715 20:36:40 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:23.715 20:36:40 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:23.715 20:36:40 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:23.715 20:36:40 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:23.715 20:36:40 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:23.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.536 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.536 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.536 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.536 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.536 20:36:41 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:24.536 20:36:41 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:24.536 20:36:41 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.536 20:36:41 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:24.536 ************************************ 00:09:24.536 START TEST nvme_flexible_data_placement 00:09:24.536 ************************************ 00:09:24.536 20:36:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:24.794 Initializing NVMe Controllers 00:09:24.794 Attaching to 0000:00:13.0 00:09:24.794 Controller supports FDP Attached to 0000:00:13.0 00:09:24.794 Namespace ID: 1 Endurance Group ID: 1 00:09:24.794 Initialization complete. 
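Before the FDP feature dump that follows, note what the controller selection just traced boils down to: get_ctrls_with_feature reads each controller's CTRATT word and tests bit 19, the Flexible Data Placement capability bit. Only nvme3 (CTRATT 0x88010) has it set; the 0x8000 controllers do not. A minimal standalone version of that test, with values taken from the trace (the function name mirrors nvme/functions.sh, but this is a self-contained sketch):

    # Bit 19 of Identify Controller CTRATT advertises FDP support.
    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))          # 1<<19 == 0x80000
    }
    ctrl_has_fdp 0x88010 && echo nvme3      # selected above
    ctrl_has_fdp 0x8000  || echo "no FDP"   # nvme0/nvme1/nvme2 above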
00:09:24.794 00:09:24.794 ================================== 00:09:24.794 == FDP tests for Namespace: #01 == 00:09:24.794 ================================== 00:09:24.794 00:09:24.794 Get Feature: FDP: 00:09:24.794 ================= 00:09:24.794 Enabled: Yes 00:09:24.794 FDP configuration Index: 0 00:09:24.794 00:09:24.794 FDP configurations log page 00:09:24.794 =========================== 00:09:24.794 Number of FDP configurations: 1 00:09:24.794 Version: 0 00:09:24.794 Size: 112 00:09:24.794 FDP Configuration Descriptor: 0 00:09:24.794 Descriptor Size: 96 00:09:24.794 Reclaim Group Identifier format: 2 00:09:24.794 FDP Volatile Write Cache: Not Present 00:09:24.794 FDP Configuration: Valid 00:09:24.794 Vendor Specific Size: 0 00:09:24.794 Number of Reclaim Groups: 2 00:09:24.794 Number of Reclaim Unit Handles: 8 00:09:24.794 Max Placement Identifiers: 128 00:09:24.794 Number of Namespaces Supported: 256 00:09:24.794 Reclaim Unit Nominal Size: 6000000 bytes 00:09:24.794 Estimated Reclaim Unit Time Limit: Not Reported 00:09:24.794 RUH Desc #000: RUH Type: Initially Isolated 00:09:24.794 RUH Desc #001: RUH Type: Initially Isolated 00:09:24.794 RUH Desc #002: RUH Type: Initially Isolated 00:09:24.794 RUH Desc #003: RUH Type: Initially Isolated 00:09:24.794 RUH Desc #004: RUH Type: Initially Isolated 00:09:24.794 RUH Desc #005: RUH Type: Initially Isolated 00:09:24.794 RUH Desc #006: RUH Type: Initially Isolated 00:09:24.794 RUH Desc #007: RUH Type: Initially Isolated 00:09:24.794 00:09:24.794 FDP reclaim unit handle usage log page 00:09:24.794 ====================================== 00:09:24.794 Number of Reclaim Unit Handles: 8 00:09:24.794 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:24.794 RUH Usage Desc #001: RUH Attributes: Unused 00:09:24.794 RUH Usage Desc #002: RUH Attributes: Unused 00:09:24.794 RUH Usage Desc #003: RUH Attributes: Unused 00:09:24.794 RUH Usage Desc #004: RUH Attributes: Unused 00:09:24.794 RUH Usage Desc #005: RUH Attributes: Unused 00:09:24.794 RUH Usage Desc #006: RUH Attributes: Unused 00:09:24.794 RUH Usage Desc #007: RUH Attributes: Unused 00:09:24.794 00:09:24.794 FDP statistics log page 00:09:24.794 ======================= 00:09:24.794 Host bytes with metadata written: 1037283328 00:09:24.794 Media bytes with metadata written: 1040498688 00:09:24.794 Media bytes erased: 0 00:09:24.794 00:09:24.794 FDP Reclaim unit handle status 00:09:24.794 ============================== 00:09:24.794 Number of RUHS descriptors: 2 00:09:24.794 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000042c5 00:09:24.794 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:24.794 00:09:24.794 FDP write on placement id: 0 success 00:09:24.794 00:09:24.794 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:24.794 00:09:24.794 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:24.794 00:09:24.794 Get Feature: FDP Events for Placement handle: #0 00:09:24.794 ======================== 00:09:24.794 Number of FDP Events: 6 00:09:24.794 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:24.794 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:24.794 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:24.794 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:24.794 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:24.794 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:24.794 00:09:24.794 FDP events log
page 00:09:24.794 =================== 00:09:24.794 Number of FDP events: 1 00:09:24.794 FDP Event #0: 00:09:24.794 Event Type: RU Not Written to Capacity 00:09:24.794 Placement Identifier: Valid 00:09:24.794 NSID: Valid 00:09:24.794 Location: Valid 00:09:24.794 Placement Identifier: 0 00:09:24.794 Event Timestamp: 7 00:09:24.794 Namespace Identifier: 1 00:09:24.794 Reclaim Group Identifier: 0 00:09:24.794 Reclaim Unit Handle Identifier: 0 00:09:24.794 00:09:24.794 FDP test passed 00:09:24.794 00:09:24.794 real 0m0.248s 00:09:24.794 user 0m0.089s 00:09:24.794 sys 0m0.058s 00:09:24.794 ************************************ 00:09:24.794 END TEST nvme_flexible_data_placement 00:09:24.794 ************************************ 00:09:24.794 20:36:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.794 20:36:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:24.794 ************************************ 00:09:24.794 END TEST nvme_fdp 00:09:24.794 ************************************ 00:09:24.794 00:09:24.794 real 0m7.514s 00:09:24.794 user 0m1.070s 00:09:24.794 sys 0m1.344s 00:09:24.794 20:36:41 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.794 20:36:41 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:24.794 20:36:41 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:24.794 20:36:41 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:24.794 20:36:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.794 20:36:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.794 20:36:41 -- common/autotest_common.sh@10 -- # set +x 00:09:25.053 ************************************ 00:09:25.053 START TEST nvme_rpc 00:09:25.053 ************************************ 00:09:25.053 20:36:41 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:25.053 * Looking for test storage... 
00:09:25.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:25.053 20:36:41 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:25.053 20:36:41 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:25.053 20:36:41 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.053 20:36:42 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:25.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.053 --rc genhtml_branch_coverage=1 00:09:25.053 --rc genhtml_function_coverage=1 00:09:25.053 --rc genhtml_legend=1 00:09:25.053 --rc geninfo_all_blocks=1 00:09:25.053 --rc geninfo_unexecuted_blocks=1 00:09:25.053 00:09:25.053 ' 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:25.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.053 --rc genhtml_branch_coverage=1 00:09:25.053 --rc genhtml_function_coverage=1 00:09:25.053 --rc genhtml_legend=1 00:09:25.053 --rc geninfo_all_blocks=1 00:09:25.053 --rc geninfo_unexecuted_blocks=1 00:09:25.053 00:09:25.053 ' 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:25.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.053 --rc genhtml_branch_coverage=1 00:09:25.053 --rc genhtml_function_coverage=1 00:09:25.053 --rc genhtml_legend=1 00:09:25.053 --rc geninfo_all_blocks=1 00:09:25.053 --rc geninfo_unexecuted_blocks=1 00:09:25.053 00:09:25.053 ' 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:25.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.053 --rc genhtml_branch_coverage=1 00:09:25.053 --rc genhtml_function_coverage=1 00:09:25.053 --rc genhtml_legend=1 00:09:25.053 --rc geninfo_all_blocks=1 00:09:25.053 --rc geninfo_unexecuted_blocks=1 00:09:25.053 00:09:25.053 ' 00:09:25.053 20:36:42 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.053 20:36:42 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:25.053 20:36:42 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:25.053 20:36:42 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:25.053 20:36:42 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65704 00:09:25.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.053 20:36:42 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:25.053 20:36:42 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65704 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65704 ']' 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.053 20:36:42 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.311 [2024-12-06 20:36:42.192391] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:09:25.311 [2024-12-06 20:36:42.192511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65704 ] 00:09:25.311 [2024-12-06 20:36:42.349789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:25.569 [2024-12-06 20:36:42.453189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.569 [2024-12-06 20:36:42.453483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.134 20:36:43 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.134 20:36:43 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:26.134 20:36:43 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:26.391 Nvme0n1 00:09:26.391 20:36:43 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:26.391 20:36:43 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:26.391 request: 00:09:26.391 { 00:09:26.391 "bdev_name": "Nvme0n1", 00:09:26.391 "filename": "non_existing_file", 00:09:26.391 "method": "bdev_nvme_apply_firmware", 00:09:26.391 "req_id": 1 00:09:26.391 } 00:09:26.391 Got JSON-RPC error response 00:09:26.391 response: 00:09:26.391 { 00:09:26.391 "code": -32603, 00:09:26.391 "message": "open file failed." 00:09:26.391 } 00:09:26.391 20:36:43 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:26.391 20:36:43 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:26.391 20:36:43 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:26.648 20:36:43 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:26.648 20:36:43 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65704 00:09:26.648 20:36:43 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65704 ']' 00:09:26.648 20:36:43 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65704 00:09:26.648 20:36:43 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:26.648 20:36:43 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.648 20:36:43 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65704 00:09:26.648 20:36:43 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.648 20:36:43 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.648 killing process with pid 65704 00:09:26.648 20:36:43 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65704' 00:09:26.648 20:36:43 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65704 00:09:26.648 20:36:43 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65704 00:09:28.545 ************************************ 00:09:28.545 END TEST nvme_rpc 00:09:28.545 ************************************ 00:09:28.545 00:09:28.545 real 0m3.300s 00:09:28.545 user 0m6.323s 00:09:28.545 sys 0m0.492s 00:09:28.545 20:36:45 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.545 20:36:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.545 20:36:45 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:28.545 20:36:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:28.545 20:36:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.545 20:36:45 -- common/autotest_common.sh@10 -- # set +x 00:09:28.545 ************************************ 00:09:28.545 START TEST nvme_rpc_timeouts 00:09:28.545 ************************************ 00:09:28.545 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:28.545 * Looking for test storage... 00:09:28.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:28.545 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.545 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.545 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.545 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:28.545 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:28.546 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.546 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:28.546 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.546 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:28.546 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:28.546 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.546 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:28.546 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.546 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.546 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.546 20:36:45 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:28.546 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.546 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:28.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.546 --rc genhtml_branch_coverage=1 00:09:28.546 --rc genhtml_function_coverage=1 00:09:28.546 --rc genhtml_legend=1 00:09:28.546 --rc geninfo_all_blocks=1 00:09:28.546 --rc geninfo_unexecuted_blocks=1 00:09:28.546 00:09:28.546 ' 00:09:28.546 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.546 --rc genhtml_branch_coverage=1 00:09:28.546 --rc genhtml_function_coverage=1 00:09:28.546 --rc genhtml_legend=1 00:09:28.546 --rc geninfo_all_blocks=1 00:09:28.546 --rc geninfo_unexecuted_blocks=1 00:09:28.546 00:09:28.546 ' 00:09:28.546 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:28.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.546 --rc genhtml_branch_coverage=1 00:09:28.546 --rc genhtml_function_coverage=1 00:09:28.546 --rc genhtml_legend=1 00:09:28.546 --rc geninfo_all_blocks=1 00:09:28.546 --rc geninfo_unexecuted_blocks=1 00:09:28.546 00:09:28.546 ' 00:09:28.546 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.546 --rc genhtml_branch_coverage=1 00:09:28.546 --rc genhtml_function_coverage=1 00:09:28.546 --rc genhtml_legend=1 00:09:28.546 --rc geninfo_all_blocks=1 00:09:28.546 --rc geninfo_unexecuted_blocks=1 00:09:28.546 00:09:28.546 ' 00:09:28.546 20:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.546 20:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65769 00:09:28.546 20:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65769 00:09:28.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
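The nvme_rpc_timeouts run below follows a snapshot-modify-snapshot pattern over JSON-RPC: save the default bdev_nvme options, change the three timeout knobs, save again, then compare selected fields between the two files. Condensed, with the rpc.py path, flag values, and tmp file names exactly as traced (the redirection of save_config into the tmp files is assumed from context):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_65769      # defaults snapshot
    $rpc bdev_nvme_set_options \
        --timeout-us=12000000 \
        --timeout-admin-us=24000000 \
        --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_65769     # modified snapshot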
00:09:28.546 20:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65801 00:09:28.546 20:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:28.546 20:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:28.546 20:36:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65801 00:09:28.546 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65801 ']' 00:09:28.546 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.546 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.546 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.546 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.546 20:36:45 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:28.546 [2024-12-06 20:36:45.488205] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:09:28.546 [2024-12-06 20:36:45.488475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65801 ] 00:09:28.546 [2024-12-06 20:36:45.647846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:28.803 [2024-12-06 20:36:45.748033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.803 [2024-12-06 20:36:45.748256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.367 20:36:46 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.367 20:36:46 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:29.367 Checking default timeout settings: 00:09:29.367 20:36:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:29.367 20:36:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:29.626 Making settings changes with rpc: 00:09:29.626 20:36:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:29.626 20:36:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:29.885 Check default vs. modified settings: 00:09:29.885 20:36:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:29.885 20:36:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65769 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65769 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:30.143 Setting action_on_timeout is changed as expected. 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65769 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65769 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:30.143 Setting timeout_us is changed as expected. 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65769 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65769 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:30.143 Setting timeout_admin_us is changed as expected. 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65769 /tmp/settings_modified_65769 00:09:30.143 20:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65801 00:09:30.143 20:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65801 ']' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65801 00:09:30.143 20:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:30.143 20:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.143 20:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65801 00:09:30.143 killing process with pid 65801 00:09:30.143 20:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.144 20:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.144 20:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65801' 00:09:30.144 20:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65801 00:09:30.144 20:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65801 00:09:32.040 RPC TIMEOUT SETTING TEST PASSED. 00:09:32.040 20:36:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
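Each "changed as expected" line above comes from the same three-stage extraction, run once per setting against both snapshots: grep the setting out of the saved config, take the second field with awk, strip punctuation with sed, and compare. A condensed sketch (the pass/fail branching is simplified relative to the script):

    get_setting() {    # get_setting <name> <settings-file>
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    before=$(get_setting timeout_us /tmp/settings_default_65769)     # -> 0
    after=$(get_setting timeout_us /tmp/settings_modified_65769)     # -> 12000000
    [[ $before == "$after" ]] && exit 1                              # value did not change
    echo "Setting timeout_us is changed as expected."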
00:09:32.040 00:09:32.040 real 0m3.513s 00:09:32.040 user 0m6.860s 00:09:32.040 sys 0m0.456s 00:09:32.040 ************************************ 00:09:32.040 END TEST nvme_rpc_timeouts 00:09:32.040 ************************************ 00:09:32.040 20:36:48 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.040 20:36:48 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:32.040 20:36:48 -- spdk/autotest.sh@239 -- # uname -s 00:09:32.040 20:36:48 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:32.040 20:36:48 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:32.040 20:36:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.040 20:36:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.040 20:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:32.040 ************************************ 00:09:32.040 START TEST sw_hotplug 00:09:32.040 ************************************ 00:09:32.040 20:36:48 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:32.040 * Looking for test storage... 00:09:32.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:32.040 20:36:48 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:32.040 20:36:48 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:32.040 20:36:48 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:32.040 20:36:48 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.040 20:36:48 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:32.040 20:36:48 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.040 20:36:48 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:32.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.040 --rc genhtml_branch_coverage=1 00:09:32.040 --rc genhtml_function_coverage=1 00:09:32.040 --rc genhtml_legend=1 00:09:32.040 --rc geninfo_all_blocks=1 00:09:32.040 --rc geninfo_unexecuted_blocks=1 00:09:32.040 00:09:32.040 ' 00:09:32.040 20:36:48 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:32.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.040 --rc genhtml_branch_coverage=1 00:09:32.040 --rc genhtml_function_coverage=1 00:09:32.040 --rc genhtml_legend=1 00:09:32.040 --rc geninfo_all_blocks=1 00:09:32.040 --rc geninfo_unexecuted_blocks=1 00:09:32.040 00:09:32.040 ' 00:09:32.040 20:36:48 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:32.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.040 --rc genhtml_branch_coverage=1 00:09:32.040 --rc genhtml_function_coverage=1 00:09:32.040 --rc genhtml_legend=1 00:09:32.040 --rc geninfo_all_blocks=1 00:09:32.040 --rc geninfo_unexecuted_blocks=1 00:09:32.040 00:09:32.040 ' 00:09:32.040 20:36:48 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:32.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.040 --rc genhtml_branch_coverage=1 00:09:32.040 --rc genhtml_function_coverage=1 00:09:32.040 --rc genhtml_legend=1 00:09:32.040 --rc geninfo_all_blocks=1 00:09:32.040 --rc geninfo_unexecuted_blocks=1 00:09:32.040 00:09:32.040 ' 00:09:32.040 20:36:48 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:32.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.298 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:32.298 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:32.298 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:32.298 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:32.298 20:36:49 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:32.298 20:36:49 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:32.298 20:36:49 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:32.298 20:36:49 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:32.298 20:36:49 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:32.298 20:36:49 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:32.299 20:36:49 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:32.299 20:36:49 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:32.299 20:36:49 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:32.299 20:36:49 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:32.299 20:36:49 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:32.299 20:36:49 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:32.299 20:36:49 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:32.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.863 Waiting for block devices as requested 00:09:32.863 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.863 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.121 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.121 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.378 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:38.378 20:36:55 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:38.378 20:36:55 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:38.378 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:38.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:38.663 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:38.663 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:38.925 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.925 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.925 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:38.925 20:36:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:39.183 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:39.183 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:39.183 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66661 00:09:39.183 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:39.183 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:39.183 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:39.183 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:39.183 20:36:56 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:39.183 20:36:56 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:39.183 20:36:56 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:39.183 20:36:56 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:39.183 20:36:56 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:39.183 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:39.183 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:39.183 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:39.183 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:39.183 20:36:56 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:39.183 Initializing NVMe Controllers 00:09:39.183 Attaching to 0000:00:10.0 00:09:39.183 Attaching to 0000:00:11.0 00:09:39.183 Attached to 0000:00:10.0 00:09:39.183 Attached to 0000:00:11.0 00:09:39.183 Initialization complete. Starting I/O... 
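Note: the nvme_in_userspace trace earlier resolves to a single lspci pipeline: match PCI class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe), and print the BDF. Reassembled from the @242-@245 lines above, runnable as-is:

    # Print the BDF of every NVMe controller (class/subclass/prog-if 01/08/02).
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc='"0108"' -F ' ' '{ if (cc ~ $2) print $1 }' | tr -d '"'

The test then keeps only the first nvme_count=2 of the four reported controllers and, via PCI_ALLOWED, asks setup.sh to bind just 0000:00:10.0 and 0000:00:11.0 to uio_pci_generic, which is why 0000:00:12.0 and 0000:00:13.0 are reported as denied above.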
00:09:39.183 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:39.183 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:39.183 00:09:40.555 QEMU NVMe Ctrl (12340 ): 2636 I/Os completed (+2636) 00:09:40.555 QEMU NVMe Ctrl (12341 ): 2666 I/Os completed (+2666) 00:09:40.555 00:09:41.503 QEMU NVMe Ctrl (12340 ): 6448 I/Os completed (+3812) 00:09:41.503 QEMU NVMe Ctrl (12341 ): 6316 I/Os completed (+3650) 00:09:41.503 00:09:42.438 QEMU NVMe Ctrl (12340 ): 9814 I/Os completed (+3366) 00:09:42.438 QEMU NVMe Ctrl (12341 ): 9670 I/Os completed (+3354) 00:09:42.438 00:09:43.373 QEMU NVMe Ctrl (12340 ): 13052 I/Os completed (+3238) 00:09:43.373 QEMU NVMe Ctrl (12341 ): 12905 I/Os completed (+3235) 00:09:43.373 00:09:44.309 QEMU NVMe Ctrl (12340 ): 16190 I/Os completed (+3138) 00:09:44.309 QEMU NVMe Ctrl (12341 ): 15983 I/Os completed (+3078) 00:09:44.309 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:45.245 [2024-12-06 20:37:02.104230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:45.245 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:45.245 [2024-12-06 20:37:02.105452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 [2024-12-06 20:37:02.105497] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 [2024-12-06 20:37:02.105514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 [2024-12-06 20:37:02.105532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:45.245 [2024-12-06 20:37:02.107810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 [2024-12-06 20:37:02.107937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 [2024-12-06 20:37:02.107971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 [2024-12-06 20:37:02.108061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:09:45.245 EAL: Scan for (pci) bus failed. 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:45.245 [2024-12-06 20:37:02.129164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:45.245 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:45.245 [2024-12-06 20:37:02.130195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 [2024-12-06 20:37:02.130236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 [2024-12-06 20:37:02.130257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 [2024-12-06 20:37:02.130272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:45.245 [2024-12-06 20:37:02.131883] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 [2024-12-06 20:37:02.132067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 [2024-12-06 20:37:02.132103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 [2024-12-06 20:37:02.132199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:45.245 EAL: Cannot open sysfs resource 00:09:45.245 EAL: pci_scan_one(): cannot parse resource 00:09:45.245 EAL: Scan for (pci) bus failed. 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:45.245 Attaching to 0000:00:10.0 00:09:45.245 Attached to 0000:00:10.0 00:09:45.245 QEMU NVMe Ctrl (12340 ): 20 I/Os completed (+20) 00:09:45.245 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:45.245 20:37:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:45.245 Attaching to 0000:00:11.0 00:09:45.245 Attached to 0000:00:11.0 00:09:46.172 QEMU NVMe Ctrl (12340 ): 3651 I/Os completed (+3631) 00:09:46.172 QEMU NVMe Ctrl (12341 ): 3600 I/Os completed (+3600) 00:09:46.172 00:09:47.591 QEMU NVMe Ctrl (12340 ): 6959 I/Os completed (+3308) 00:09:47.591 QEMU NVMe Ctrl (12341 ): 7289 I/Os completed (+3689) 00:09:47.591 00:09:48.521 QEMU NVMe Ctrl (12340 ): 10602 I/Os completed (+3643) 00:09:48.521 QEMU NVMe Ctrl (12341 ): 11586 I/Os completed (+4297) 00:09:48.521 00:09:49.452 QEMU NVMe Ctrl (12340 ): 13803 I/Os completed (+3201) 00:09:49.452 QEMU NVMe Ctrl (12341 ): 15035 I/Os completed (+3449) 00:09:49.452 00:09:50.389 QEMU NVMe Ctrl (12340 ): 17381 I/Os completed (+3578) 00:09:50.389 QEMU NVMe Ctrl (12341 ): 19166 I/Os completed (+4131) 00:09:50.389 00:09:51.320 QEMU NVMe Ctrl (12340 ): 20765 I/Os completed (+3384) 00:09:51.320 QEMU NVMe Ctrl (12341 ): 22496 I/Os completed (+3330) 00:09:51.320 00:09:52.253 QEMU NVMe Ctrl (12340 ): 24590 I/Os completed (+3825) 00:09:52.253 
QEMU NVMe Ctrl (12341 ): 26632 I/Os completed (+4136) 00:09:52.253 00:09:53.197 QEMU NVMe Ctrl (12340 ): 28213 I/Os completed (+3623) 00:09:53.197 QEMU NVMe Ctrl (12341 ): 30412 I/Os completed (+3780) 00:09:53.197 00:09:54.575 QEMU NVMe Ctrl (12340 ): 31470 I/Os completed (+3257) 00:09:54.575 QEMU NVMe Ctrl (12341 ): 33616 I/Os completed (+3204) 00:09:54.575 00:09:55.514 QEMU NVMe Ctrl (12340 ): 34612 I/Os completed (+3142) 00:09:55.514 QEMU NVMe Ctrl (12341 ): 37137 I/Os completed (+3521) 00:09:55.514 00:09:56.458 QEMU NVMe Ctrl (12340 ): 38299 I/Os completed (+3687) 00:09:56.458 QEMU NVMe Ctrl (12341 ): 40823 I/Os completed (+3686) 00:09:56.458 00:09:57.397 QEMU NVMe Ctrl (12340 ): 41928 I/Os completed (+3629) 00:09:57.397 QEMU NVMe Ctrl (12341 ): 44445 I/Os completed (+3622) 00:09:57.397 00:09:57.398 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:57.398 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:57.398 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.398 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.398 [2024-12-06 20:37:14.377714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:57.398 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:57.398 [2024-12-06 20:37:14.378979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 [2024-12-06 20:37:14.379053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 [2024-12-06 20:37:14.379089] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 [2024-12-06 20:37:14.379124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:57.398 [2024-12-06 20:37:14.381109] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 [2024-12-06 20:37:14.381228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 [2024-12-06 20:37:14.381299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 [2024-12-06 20:37:14.381330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.398 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.398 [2024-12-06 20:37:14.402052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
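Note: the interleaved "QEMU NVMe Ctrl (12340/12341): N I/Os completed (+d)" lines above are the hotplug app's per-tick counters, a cumulative total plus the delta for that interval, printed once per controller per tick. To summarize a run from such a log, a small sketch (field positions assumed from the line shape above; hotplug.log is a hypothetical capture file):

    # Average per-tick completions for each controller serial in hotplug.log.
    awk '/I\/Os completed \(\+/ {
        ctrl = $5; gsub(/[()]/, "", ctrl)   # "(12340" -> "12340"
        d = $NF;   gsub(/[()+]/, "", d)     # "(+2636)" -> "2636"
        sum[ctrl] += d; n[ctrl]++
    } END {
        for (c in sum)
            printf "%s: avg +%.0f I/Os per tick (%d ticks)\n", c, sum[c] / n[c], n[c]
    }' hotplug.log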
00:09:57.398 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:57.398 [2024-12-06 20:37:14.403216] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 [2024-12-06 20:37:14.403326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 [2024-12-06 20:37:14.403366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 [2024-12-06 20:37:14.403418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:57.398 [2024-12-06 20:37:14.405247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 [2024-12-06 20:37:14.405346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 [2024-12-06 20:37:14.405380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 [2024-12-06 20:37:14.405442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.398 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:57.398 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:57.398 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:57.398 EAL: Scan for (pci) bus failed. 00:09:57.398 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.398 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.398 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:57.656 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:57.656 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.656 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.656 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.656 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:57.656 Attaching to 0000:00:10.0 00:09:57.656 Attached to 0000:00:10.0 00:09:57.656 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:57.656 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.656 20:37:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:57.656 Attaching to 0000:00:11.0 00:09:57.656 Attached to 0000:00:11.0 00:09:58.221 QEMU NVMe Ctrl (12340 ): 2237 I/Os completed (+2237) 00:09:58.221 QEMU NVMe Ctrl (12341 ): 2024 I/Os completed (+2024) 00:09:58.221 00:09:59.597 QEMU NVMe Ctrl (12340 ): 5255 I/Os completed (+3018) 00:09:59.597 QEMU NVMe Ctrl (12341 ): 5085 I/Os completed (+3061) 00:09:59.597 00:10:00.530 QEMU NVMe Ctrl (12340 ): 8385 I/Os completed (+3130) 00:10:00.530 QEMU NVMe Ctrl (12341 ): 8184 I/Os completed (+3099) 00:10:00.530 00:10:01.465 QEMU NVMe Ctrl (12340 ): 11559 I/Os completed (+3174) 00:10:01.465 QEMU NVMe Ctrl (12341 ): 11490 I/Os completed (+3306) 00:10:01.465 00:10:02.400 QEMU NVMe Ctrl (12340 ): 14693 I/Os completed (+3134) 00:10:02.400 QEMU NVMe Ctrl (12341 ): 14625 I/Os completed (+3135) 00:10:02.400 00:10:03.333 QEMU NVMe Ctrl (12340 ): 18284 I/Os completed (+3591) 00:10:03.333 QEMU NVMe Ctrl (12341 ): 18235 I/Os completed (+3610) 00:10:03.333 00:10:04.268 QEMU NVMe Ctrl (12340 ): 21974 I/Os completed (+3690) 00:10:04.268 QEMU NVMe Ctrl (12341 ): 21885 I/Os completed (+3650) 00:10:04.268 
00:10:05.201 QEMU NVMe Ctrl (12340 ): 25540 I/Os completed (+3566) 00:10:05.202 QEMU NVMe Ctrl (12341 ): 25454 I/Os completed (+3569) 00:10:05.202 00:10:06.572 QEMU NVMe Ctrl (12340 ): 28656 I/Os completed (+3116) 00:10:06.572 QEMU NVMe Ctrl (12341 ): 28605 I/Os completed (+3151) 00:10:06.572 00:10:07.501 QEMU NVMe Ctrl (12340 ): 31800 I/Os completed (+3144) 00:10:07.501 QEMU NVMe Ctrl (12341 ): 31670 I/Os completed (+3065) 00:10:07.501 00:10:08.464 QEMU NVMe Ctrl (12340 ): 35242 I/Os completed (+3442) 00:10:08.464 QEMU NVMe Ctrl (12341 ): 35210 I/Os completed (+3540) 00:10:08.464 00:10:09.410 QEMU NVMe Ctrl (12340 ): 38498 I/Os completed (+3256) 00:10:09.410 QEMU NVMe Ctrl (12341 ): 38417 I/Os completed (+3207) 00:10:09.410 00:10:09.668 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:09.668 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:09.668 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.668 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.668 [2024-12-06 20:37:26.637859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:09.668 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:09.668 [2024-12-06 20:37:26.639167] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 [2024-12-06 20:37:26.639298] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 [2024-12-06 20:37:26.639334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 [2024-12-06 20:37:26.639428] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:09.668 [2024-12-06 20:37:26.641459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 [2024-12-06 20:37:26.641566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 [2024-12-06 20:37:26.641585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 [2024-12-06 20:37:26.641601] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.668 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.668 [2024-12-06 20:37:26.661725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:09.668 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:09.668 [2024-12-06 20:37:26.662860] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 [2024-12-06 20:37:26.663035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 [2024-12-06 20:37:26.663059] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 [2024-12-06 20:37:26.663075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:09.668 [2024-12-06 20:37:26.664779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 [2024-12-06 20:37:26.664819] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 [2024-12-06 20:37:26.664837] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 [2024-12-06 20:37:26.664849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.668 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:09.668 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:09.668 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:09.668 EAL: Scan for (pci) bus failed. 00:10:09.668 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.668 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.668 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:09.926 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:09.926 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.926 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.926 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.926 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:09.926 Attaching to 0000:00:10.0 00:10:09.926 Attached to 0000:00:10.0 00:10:09.926 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:09.926 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.926 20:37:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:09.926 Attaching to 0000:00:11.0 00:10:09.926 Attached to 0000:00:11.0 00:10:09.926 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:09.926 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:09.926 [2024-12-06 20:37:26.928793] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:22.120 20:37:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:22.120 20:37:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:22.120 20:37:38 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.82 00:10:22.120 20:37:38 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.82 00:10:22.120 20:37:38 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:22.120 20:37:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.82 00:10:22.120 20:37:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.82 2 00:10:22.120 remove_attach_helper took 42.82s to complete (handling 2 nvme drive(s)) 20:37:38 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:28.682 20:37:44 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66661 00:10:28.683 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66661) - No such process 00:10:28.683 20:37:44 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66661 00:10:28.683 20:37:44 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:28.683 20:37:44 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:28.683 20:37:44 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:28.683 20:37:44 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67210 00:10:28.683 20:37:44 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:28.683 20:37:44 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:28.683 20:37:44 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67210 00:10:28.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.683 20:37:44 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67210 ']' 00:10:28.683 20:37:44 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.683 20:37:44 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.683 20:37:44 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.683 20:37:44 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.683 20:37:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:28.683 [2024-12-06 20:37:45.009654] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
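Note: with the standalone app done (kill -0 66661 confirms the PID is gone), the test repeats the exercise against a long-lived target: it launches spdk_tgt as PID 67210 and blocks in waitforlisten until the RPC socket answers. A sketch of that startup handshake, with the polling loop standing in for the real waitforlisten helper:

    ./build/bin/spdk_tgt &
    tgt_pid=$!
    # Poll the RPC socket until the target is ready, roughly what waitforlisten does.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt died" >&2; exit 1; }
        sleep 0.5
    done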
00:10:28.683 [2024-12-06 20:37:45.009936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67210 ] 00:10:28.683 [2024-12-06 20:37:45.170655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.683 [2024-12-06 20:37:45.266558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.942 20:37:45 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.942 20:37:45 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:28.942 20:37:45 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:28.942 20:37:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.942 20:37:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:28.942 20:37:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.942 20:37:45 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:28.942 20:37:45 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:28.942 20:37:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:28.942 20:37:45 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:28.942 20:37:45 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:28.942 20:37:45 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:28.942 20:37:45 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:28.942 20:37:45 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:28.942 20:37:45 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:28.942 20:37:45 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:28.942 20:37:45 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:28.942 20:37:45 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:28.942 20:37:45 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:35.498 20:37:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.498 20:37:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:35.498 20:37:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:35.498 20:37:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:35.498 [2024-12-06 20:37:51.955066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:35.498 [2024-12-06 20:37:51.956520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.498 [2024-12-06 20:37:51.956556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.498 [2024-12-06 20:37:51.956569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.498 [2024-12-06 20:37:51.956588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.498 [2024-12-06 20:37:51.956595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.498 [2024-12-06 20:37:51.956604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.498 [2024-12-06 20:37:51.956611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.498 [2024-12-06 20:37:51.956619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.498 [2024-12-06 20:37:51.956626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.499 [2024-12-06 20:37:51.956637] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.499 [2024-12-06 20:37:51.956643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.499 [2024-12-06 20:37:51.956651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.499 [2024-12-06 20:37:52.355066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
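Note: the @115 rpc_cmd bdev_nvme_set_hotplug -e above is what makes this pass bdev-aware (use_bdev=true): with the monitor enabled, a surprise-removed controller's bdevs drop out of bdev_get_bdevs instead of the example app observing the failure directly. Equivalent direct calls through SPDK's rpc.py (the -r poll period is optional, and the value shown is illustrative):

    ./scripts/rpc.py bdev_nvme_set_hotplug -e             # enable the hotplug monitor
    ./scripts/rpc.py bdev_nvme_set_hotplug -e -r 100000   # enable, 100 ms poll period (usec)
    ./scripts/rpc.py bdev_nvme_set_hotplug -d             # disable it again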
00:10:35.499 [2024-12-06 20:37:52.356406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.499 [2024-12-06 20:37:52.356439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.499 [2024-12-06 20:37:52.356451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.499 [2024-12-06 20:37:52.356466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.499 [2024-12-06 20:37:52.356475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.499 [2024-12-06 20:37:52.356482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.499 [2024-12-06 20:37:52.356491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.499 [2024-12-06 20:37:52.356497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.499 [2024-12-06 20:37:52.356505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.499 [2024-12-06 20:37:52.356512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.499 [2024-12-06 20:37:52.356519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.499 [2024-12-06 20:37:52.356526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:35.499 20:37:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.499 20:37:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:35.499 20:37:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:35.499 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:35.757 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:35.757 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:35.757 20:37:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.962 20:38:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.962 20:38:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.962 20:38:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.962 [2024-12-06 20:38:04.755253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:47.962 [2024-12-06 20:38:04.756757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.962 [2024-12-06 20:38:04.756794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.962 [2024-12-06 20:38:04.756805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-06 20:38:04.756821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.962 [2024-12-06 20:38:04.756828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.962 [2024-12-06 20:38:04.756837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-06 20:38:04.756844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.962 [2024-12-06 20:38:04.756852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.962 [2024-12-06 20:38:04.756858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-06 20:38:04.756867] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.962 [2024-12-06 20:38:04.756873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.962 [2024-12-06 20:38:04.756881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.962 20:38:04 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.962 20:38:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.962 20:38:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.962 20:38:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:47.962 20:38:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:48.219 [2024-12-06 20:38:05.155250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:48.219 [2024-12-06 20:38:05.156478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.219 [2024-12-06 20:38:05.156513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:48.219 [2024-12-06 20:38:05.156530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:48.219 [2024-12-06 20:38:05.156545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.219 [2024-12-06 20:38:05.156554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:48.219 [2024-12-06 20:38:05.156561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:48.219 [2024-12-06 20:38:05.156570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.219 [2024-12-06 20:38:05.156577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:48.219 [2024-12-06 20:38:05.156585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:48.219 [2024-12-06 20:38:05.156592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.219 [2024-12-06 20:38:05.156600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:48.219 [2024-12-06 20:38:05.156607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:48.219 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:48.219 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:48.219 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:48.219 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:48.219 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:48.219 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:10:48.219 20:38:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.219 20:38:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:48.219 20:38:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.475 20:38:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.674 20:38:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.674 20:38:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.674 20:38:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.674 [2024-12-06 20:38:17.655446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:00.674 [2024-12-06 20:38:17.656835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.674 [2024-12-06 20:38:17.656870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.674 [2024-12-06 20:38:17.656881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.674 [2024-12-06 20:38:17.656907] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.674 [2024-12-06 20:38:17.656915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.674 [2024-12-06 20:38:17.656925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.674 [2024-12-06 20:38:17.656933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.674 [2024-12-06 20:38:17.656941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.674 [2024-12-06 20:38:17.656947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.674 [2024-12-06 20:38:17.656955] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.674 [2024-12-06 20:38:17.656961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.674 [2024-12-06 20:38:17.656970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.674 20:38:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.674 20:38:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.674 20:38:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:00.674 20:38:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:01.241 [2024-12-06 20:38:18.155447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
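Note: the bdev_bdfs helper traced above derives the set of PCI addresses still backing NVMe bdevs, and the test spins on it ("Still waiting for %s to be gone") until the removed controller's bdevs vanish. The same query and wait loop, reassembled from the trace (rpc.py stands in for the test's rpc_cmd wrapper):

    bdev_bdfs() {
        ./scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }

    while :; do
        bdfs=($(bdev_bdfs))
        (( ${#bdfs[@]} == 0 )) && break
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done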
00:11:01.241 [2024-12-06 20:38:18.156637] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.241 [2024-12-06 20:38:18.156672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.241 [2024-12-06 20:38:18.156683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.241 [2024-12-06 20:38:18.156698] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.241 [2024-12-06 20:38:18.156707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.241 [2024-12-06 20:38:18.156714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.241 [2024-12-06 20:38:18.156722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.241 [2024-12-06 20:38:18.156729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.241 [2024-12-06 20:38:18.156738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.241 [2024-12-06 20:38:18.156745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.241 [2024-12-06 20:38:18.156753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.241 [2024-12-06 20:38:18.156759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.241 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:01.241 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:01.241 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:01.241 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:01.241 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:01.241 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:01.241 20:38:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.241 20:38:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:01.241 20:38:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.241 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:01.241 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:01.241 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.241 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.241 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:01.499 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:01.499 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.499 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.499 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.499 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:01.499 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:01.499 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.499 20:38:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:13.686 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:13.686 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:13.686 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:13.686 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.68 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.68 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.68 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.68 2 00:11:13.687 remove_attach_helper took 44.68s to complete (handling 2 nvme drive(s)) 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:13.687 20:38:30 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:13.687 20:38:30 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:13.687 20:38:30 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.263 20:38:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.263 20:38:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.263 20:38:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:20.263 20:38:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:20.263 [2024-12-06 20:38:36.661060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:20.263 [2024-12-06 20:38:36.661991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.263 [2024-12-06 20:38:36.662028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.264 [2024-12-06 20:38:36.662039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.264 [2024-12-06 20:38:36.662056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.264 [2024-12-06 20:38:36.662064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.264 [2024-12-06 20:38:36.662072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.264 [2024-12-06 20:38:36.662080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.264 [2024-12-06 20:38:36.662088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.264 [2024-12-06 20:38:36.662094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.264 [2024-12-06 20:38:36.662103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.264 [2024-12-06 20:38:36.662109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.264 [2024-12-06 20:38:36.662121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.264 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:20.264 20:38:37 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:20.264 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:20.264 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.264 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.264 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.264 20:38:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.264 20:38:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.264 [2024-12-06 20:38:37.161057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:20.264 [2024-12-06 20:38:37.161975] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.264 [2024-12-06 20:38:37.162004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.264 [2024-12-06 20:38:37.162015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.264 [2024-12-06 20:38:37.162029] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.264 [2024-12-06 20:38:37.162038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.264 [2024-12-06 20:38:37.162044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.264 [2024-12-06 20:38:37.162053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.264 [2024-12-06 20:38:37.162059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.264 [2024-12-06 20:38:37.162067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.264 [2024-12-06 20:38:37.162074] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.264 [2024-12-06 20:38:37.162081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.264 [2024-12-06 20:38:37.162088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.264 20:38:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.264 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:20.264 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.831 20:38:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.831 20:38:37 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.831 20:38:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.831 20:38:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:33.044 20:38:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:33.044 20:38:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:33.044 20:38:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:33.044 20:38:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:33.044 20:38:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:33.044 20:38:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:33.044 20:38:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.044 20:38:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:33.044 20:38:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.044 20:38:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:33.044 20:38:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:33.044 20:38:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:33.044 20:38:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:33.044 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:33.044 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:33.044 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:33.044 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:33.044 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:33.044 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:33.044 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:33.044 20:38:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.044 20:38:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:33.044 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:33.044 20:38:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.044 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:33.044 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:33.044 [2024-12-06 20:38:50.061278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
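The @56-@66 steps traced above bring the devices back after the soft remove. xtrace does not record redirections, so the targets of these echo commands never appear in the log; a minimal reconstruction of the re-attach path, assuming the conventional sysfs entry points (rescan, driver_override, drivers_probe, bind), looks roughly like this:

    echo 1 > /sys/bus/pci/rescan                                          # @56: re-scan the PCI bus
    for dev in "${nvmes[@]}"; do                                          # @58: 0000:00:10.0 and 0000:00:11.0 here
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59: pin the driver (target assumed)
        echo "$dev" > /sys/bus/pci/drivers_probe                          # @60: ask the kernel to probe (target assumed)
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind           # @61: BDF written a second time (target assumed)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"             # @62: clear the override again
    done
    sleep 12                                                              # @66: let hotplug settle before re-checking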
00:11:33.044 [2024-12-06 20:38:50.062328] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.044 [2024-12-06 20:38:50.062363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.044 [2024-12-06 20:38:50.062374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.044 [2024-12-06 20:38:50.062392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.044 [2024-12-06 20:38:50.062400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.044 [2024-12-06 20:38:50.062408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.044 [2024-12-06 20:38:50.062416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.044 [2024-12-06 20:38:50.062424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.044 [2024-12-06 20:38:50.062430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.044 [2024-12-06 20:38:50.062438] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.044 [2024-12-06 20:38:50.062445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.044 [2024-12-06 20:38:50.062453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.669 [2024-12-06 20:38:50.461291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:33.669 [2024-12-06 20:38:50.462553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.669 [2024-12-06 20:38:50.462586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.669 [2024-12-06 20:38:50.462598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.669 [2024-12-06 20:38:50.462613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.669 [2024-12-06 20:38:50.462625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.669 [2024-12-06 20:38:50.462632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.669 [2024-12-06 20:38:50.462641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.669 [2024-12-06 20:38:50.462648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.669 [2024-12-06 20:38:50.462656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.669 [2024-12-06 20:38:50.462663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.669 [2024-12-06 20:38:50.462670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.669 [2024-12-06 20:38:50.462677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:33.669 20:38:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:33.669 20:38:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:33.669 20:38:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:33.669 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:33.925 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:33.925 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:33.925 20:38:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.135 20:39:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.135 20:39:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.135 20:39:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.135 20:39:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.135 20:39:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.135 20:39:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:46.135 20:39:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:46.135 [2024-12-06 20:39:02.961533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
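The detach half of each iteration is the @38-@43 run traced above: decrement the event counter, soft-remove each device, then branch on use_bdev (set to true at @29) into the bdev-polling wait. The remove target is not captured by xtrace; a sketch assuming the standard sysfs hook:

    while (( hotplug_events-- )); do                      # @38: 3 events for this helper call
        for dev in "${nvmes[@]}"; do                      # @39
            echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @40: sysfs path assumed, not traced
        done
        if "$use_bdev"; then                              # @43: expands to `true` in this run
            wait_for_bdevs_gone                           # hypothetical name for the @50-@51 polling loop
        fi
        # ... re-attach and verify, then repeat
    done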
00:11:46.135 [2024-12-06 20:39:02.963062] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:46.135 [2024-12-06 20:39:02.963115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:46.135 [2024-12-06 20:39:02.963131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:46.135 [2024-12-06 20:39:02.963160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:46.135 [2024-12-06 20:39:02.963170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:46.135 [2024-12-06 20:39:02.963183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:46.135 [2024-12-06 20:39:02.963194] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:46.135 [2024-12-06 20:39:02.963210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:46.135 [2024-12-06 20:39:02.963218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:46.135 [2024-12-06 20:39:02.963230] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:46.135 [2024-12-06 20:39:02.963239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:46.135 [2024-12-06 20:39:02.963250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:46.396 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:11:46.396 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:46.396 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:46.396 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:46.396 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:46.396 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:46.396 20:39:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.396 20:39:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:46.396 20:39:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.396 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:11:46.396 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:46.656 [2024-12-06 20:39:03.561556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
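The wait itself (@50-@51) is a half-second poll; the (( 2 > 0 )), (( 1 > 0 )), (( 0 > 0 )) tests in the trace are the loop condition evaluated as the two controllers disappear one after the other. Roughly:

    bdfs=($(bdev_bdfs))                                           # @50: initial poll
    while (( ${#bdfs[@]} > 0 )); do
        sleep 0.5                                                 # @50
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
        bdfs=($(bdev_bdfs))                                       # re-poll, then re-test
    done

This ordering matches the trace: each 'Still waiting' line reports the list from the previous poll, which is why the message naming only 0000:00:11.0 appears one cycle after 0000:00:10.0 drops out.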
00:11:46.656 [2024-12-06 20:39:03.563911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.656 [2024-12-06 20:39:03.563966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.656 [2024-12-06 20:39:03.563988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.656 [2024-12-06 20:39:03.564019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.656 [2024-12-06 20:39:03.564032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.656 [2024-12-06 20:39:03.564042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.656 [2024-12-06 20:39:03.564056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.656 [2024-12-06 20:39:03.564066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.656 [2024-12-06 20:39:03.564077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.656 [2024-12-06 20:39:03.564091] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.656 [2024-12-06 20:39:03.564110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.656 [2024-12-06 20:39:03.564119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.916 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:46.916 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.916 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.916 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.916 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.916 20:39:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.916 20:39:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.916 20:39:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.916 20:39:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.916 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:46.916 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:47.174 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.174 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.174 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:47.174 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:47.174 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.174 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.174 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.174 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
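The re-bind loop continues just below for 0000:00:11.0; once it finishes and the 12-second settle at @66 elapses, the iteration closes with the @68-@71 verification, which is explicit in the trace: the freshly polled BDF list must match the expected set exactly. A sketch, assuming nvmes holds the expected BDFs:

    sleep 12                                   # @66: let hotplug settle
    if "$use_bdev"; then                       # @68: `true` in this run
        bdfs=($(bdev_bdfs))                    # @70: re-poll the bdev layer
        [[ ${bdfs[*]} == "${nvmes[*]}" ]]      # @71: under `set -e` a mismatch fails the test
    fi

The backslash-escaped right-hand side in the @71 trace line is simply how xtrace prints a quoted (pattern-suppressed) [[ ]] comparison.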
00:11:47.174 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:47.174 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.174 20:39:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:59.455 20:39:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:59.455 20:39:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:59.455 20:39:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:59.455 20:39:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.455 20:39:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.455 20:39:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.455 20:39:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:59.455 20:39:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.76 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.76 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:59.455 20:39:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.76 00:11:59.455 20:39:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.76 2 00:11:59.455 remove_attach_helper took 45.76s to complete (handling 2 nvme drive(s)) 20:39:16 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:59.455 20:39:16 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67210 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67210 ']' 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67210 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67210 00:11:59.455 killing process with pid 67210 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67210' 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67210 00:11:59.455 20:39:16 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67210 00:12:00.874 20:39:17 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:00.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:01.134 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:01.134 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:01.396 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:01.396 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:01.396 00:12:01.396 real 2m29.569s 00:12:01.396 user 1m52.228s 00:12:01.396 sys 0m16.098s 00:12:01.396 20:39:18 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.396 20:39:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.396 ************************************ 00:12:01.396 END TEST sw_hotplug 00:12:01.396 ************************************ 00:12:01.396 20:39:18 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:01.396 20:39:18 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:01.396 20:39:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.396 20:39:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.396 20:39:18 -- common/autotest_common.sh@10 -- # set +x 00:12:01.396 ************************************ 00:12:01.396 START TEST nvme_xnvme 00:12:01.396 ************************************ 00:12:01.396 20:39:18 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:01.396 * Looking for test storage... 00:12:01.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:01.396 20:39:18 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:01.396 20:39:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:01.396 20:39:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.658 20:39:18 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:01.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.658 --rc genhtml_branch_coverage=1 00:12:01.658 --rc genhtml_function_coverage=1 00:12:01.658 --rc genhtml_legend=1 00:12:01.658 --rc geninfo_all_blocks=1 00:12:01.658 --rc geninfo_unexecuted_blocks=1 00:12:01.658 00:12:01.658 ' 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:01.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.658 --rc genhtml_branch_coverage=1 00:12:01.658 --rc genhtml_function_coverage=1 00:12:01.658 --rc genhtml_legend=1 00:12:01.658 --rc geninfo_all_blocks=1 00:12:01.658 --rc geninfo_unexecuted_blocks=1 00:12:01.658 00:12:01.658 ' 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:01.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.658 --rc genhtml_branch_coverage=1 00:12:01.658 --rc genhtml_function_coverage=1 00:12:01.658 --rc genhtml_legend=1 00:12:01.658 --rc geninfo_all_blocks=1 00:12:01.658 --rc geninfo_unexecuted_blocks=1 00:12:01.658 00:12:01.658 ' 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:01.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.658 --rc genhtml_branch_coverage=1 00:12:01.658 --rc genhtml_function_coverage=1 00:12:01.658 --rc genhtml_legend=1 00:12:01.658 --rc geninfo_all_blocks=1 00:12:01.658 --rc geninfo_unexecuted_blocks=1 00:12:01.658 00:12:01.658 ' 00:12:01.658 20:39:18 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:01.658 20:39:18 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:01.658 20:39:18 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:01.658 20:39:18 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:01.658 20:39:18 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:01.658 20:39:18 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:01.658 20:39:18 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:01.658 20:39:18 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:01.658 20:39:18 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:01.659 20:39:18 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:01.659 20:39:18 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:01.659 20:39:18 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:01.659 20:39:18 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:01.659 #define SPDK_CONFIG_H 00:12:01.659 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:01.659 #define SPDK_CONFIG_APPS 1 00:12:01.659 #define SPDK_CONFIG_ARCH native 00:12:01.659 #define SPDK_CONFIG_ASAN 1 00:12:01.659 #undef SPDK_CONFIG_AVAHI 00:12:01.659 #undef SPDK_CONFIG_CET 00:12:01.659 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:01.659 #define SPDK_CONFIG_COVERAGE 1 00:12:01.659 #define SPDK_CONFIG_CROSS_PREFIX 00:12:01.659 #undef SPDK_CONFIG_CRYPTO 00:12:01.659 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:01.659 #undef SPDK_CONFIG_CUSTOMOCF 00:12:01.659 #undef SPDK_CONFIG_DAOS 00:12:01.659 #define SPDK_CONFIG_DAOS_DIR 00:12:01.659 #define SPDK_CONFIG_DEBUG 1 00:12:01.659 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:01.659 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:01.659 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:01.659 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:01.659 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:01.659 #undef SPDK_CONFIG_DPDK_UADK 00:12:01.659 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:01.659 #define SPDK_CONFIG_EXAMPLES 1 00:12:01.659 #undef SPDK_CONFIG_FC 00:12:01.659 #define SPDK_CONFIG_FC_PATH 00:12:01.659 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:01.659 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:01.659 #define SPDK_CONFIG_FSDEV 1 00:12:01.659 #undef SPDK_CONFIG_FUSE 00:12:01.659 #undef SPDK_CONFIG_FUZZER 00:12:01.659 #define SPDK_CONFIG_FUZZER_LIB 00:12:01.659 #undef SPDK_CONFIG_GOLANG 00:12:01.659 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:01.659 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:01.659 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:01.659 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:01.659 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:01.659 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:01.659 #undef SPDK_CONFIG_HAVE_LZ4 00:12:01.659 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:01.659 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:01.659 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:01.660 #define SPDK_CONFIG_IDXD 1 00:12:01.660 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:01.660 #undef SPDK_CONFIG_IPSEC_MB 00:12:01.660 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:01.660 #define SPDK_CONFIG_ISAL 1 00:12:01.660 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:01.660 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:01.660 #define SPDK_CONFIG_LIBDIR 00:12:01.660 #undef SPDK_CONFIG_LTO 00:12:01.660 #define SPDK_CONFIG_MAX_LCORES 128 00:12:01.660 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:01.660 #define SPDK_CONFIG_NVME_CUSE 1 00:12:01.660 #undef SPDK_CONFIG_OCF 00:12:01.660 #define SPDK_CONFIG_OCF_PATH 00:12:01.660 #define SPDK_CONFIG_OPENSSL_PATH 00:12:01.660 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:01.660 #define SPDK_CONFIG_PGO_DIR 00:12:01.660 #undef SPDK_CONFIG_PGO_USE 00:12:01.660 #define SPDK_CONFIG_PREFIX /usr/local 00:12:01.660 #undef SPDK_CONFIG_RAID5F 00:12:01.660 #undef SPDK_CONFIG_RBD 00:12:01.660 #define SPDK_CONFIG_RDMA 1 00:12:01.660 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:01.660 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:01.660 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:01.660 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:01.660 #define SPDK_CONFIG_SHARED 1 00:12:01.660 #undef SPDK_CONFIG_SMA 00:12:01.660 #define SPDK_CONFIG_TESTS 1 00:12:01.660 #undef SPDK_CONFIG_TSAN 00:12:01.660 #define SPDK_CONFIG_UBLK 1 00:12:01.660 #define SPDK_CONFIG_UBSAN 1 00:12:01.660 #undef SPDK_CONFIG_UNIT_TESTS 00:12:01.660 #undef SPDK_CONFIG_URING 00:12:01.660 #define SPDK_CONFIG_URING_PATH 00:12:01.660 #undef SPDK_CONFIG_URING_ZNS 00:12:01.660 #undef SPDK_CONFIG_USDT 00:12:01.660 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:01.660 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:01.660 #undef SPDK_CONFIG_VFIO_USER 00:12:01.660 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:01.660 #define SPDK_CONFIG_VHOST 1 00:12:01.660 #define SPDK_CONFIG_VIRTIO 1 00:12:01.660 #undef SPDK_CONFIG_VTUNE 00:12:01.660 #define SPDK_CONFIG_VTUNE_DIR 00:12:01.660 #define SPDK_CONFIG_WERROR 1 00:12:01.660 #define SPDK_CONFIG_WPDK_DIR 00:12:01.660 #define SPDK_CONFIG_XNVME 1 00:12:01.660 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:01.660 20:39:18 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:01.660 20:39:18 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:01.660 20:39:18 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.660 20:39:18 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.660 20:39:18 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.660 20:39:18 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.660 20:39:18 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.660 20:39:18 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.660 20:39:18 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.660 20:39:18 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:01.660 20:39:18 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:01.660 
20:39:18 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:01.660 20:39:18 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:01.660 20:39:18 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:01.660 20:39:18 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:01.661 20:39:18 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:01.661 20:39:18 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:01.661 20:39:18 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
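For readers following the trace: the long run of "-- # : 0" / "-- # export SPDK_TEST_*" pairs above is bash's parameter-defaulting idiom, and the sanitizer knobs are plain environment exports. A minimal sketch of the pattern (flag names and option strings are copied from the trace; reading the source as ": ${VAR:=default}" is an assumption based on how set -x renders it):

# Each flag defaults only if the caller did not already set it; xtrace prints
# the expansion result, which is the "-- # : 0" / "-- # : 1" seen above.
: "${SPDK_TEST_NVME_FDP:=1}"
export SPDK_TEST_NVME_FDP
: "${SPDK_TEST_NVMF:=0}"
export SPDK_TEST_NVMF

# Sanitizer behaviour is driven entirely by environment (values verbatim from
# the trace); the LSAN suppression file is seeded with "leak:libfuse3.so".
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file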
00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68561 ]] 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68561 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.tEVOc5 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.tEVOc5/tests/xnvme /tmp/spdk.tEVOc5 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:01.662 20:39:18 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13956317184 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5611700224 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13956317184 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5611700224 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.662 20:39:18 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265249792 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95702626304 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4000153600 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:01.662 * Looking for test storage... 
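The df -T / read -r sequence above, together with the selection traced just below, is set_test_storage(): build per-mountpoint maps, then walk the candidate directories until one sits on a filesystem with enough free space. A condensed sketch (field order, the mktemp template, and the awk/df calls mirror the trace; the tmpfs/ramfs special cases and error handling are elided):

set_test_storage() {
    local requested_size=$1 target_space mount target_dir
    local -A mounts fss avails
    local storage_fallback storage_candidates
    storage_fallback=$(mktemp -udt spdk.XXXXXX)
    # $testdir is the caller's test directory (test/nvme/xnvme here).
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    # One associative-array entry per mountpoint, keyed on the mount path.
    # df -T columns: source, fstype, size, used, available, use%, mountpoint.
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)
    for target_dir in "${storage_candidates[@]}"; do
        # Resolve which mountpoint backs this candidate, then check headroom.
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails["$mount"]}
        if ((target_space >= requested_size)); then
            mkdir -p "$target_dir"
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            return 0
        fi
    done
    return 1
}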
00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13956317184 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:01.662 20:39:18 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:01.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:01.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.663 --rc genhtml_branch_coverage=1 00:12:01.663 --rc genhtml_function_coverage=1 00:12:01.663 --rc genhtml_legend=1 00:12:01.663 --rc geninfo_all_blocks=1 00:12:01.663 --rc geninfo_unexecuted_blocks=1 00:12:01.663 00:12:01.663 ' 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:01.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.663 --rc genhtml_branch_coverage=1 00:12:01.663 --rc genhtml_function_coverage=1 00:12:01.663 --rc genhtml_legend=1 00:12:01.663 --rc geninfo_all_blocks=1 
00:12:01.663 --rc geninfo_unexecuted_blocks=1 00:12:01.663 00:12:01.663 ' 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:01.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.663 --rc genhtml_branch_coverage=1 00:12:01.663 --rc genhtml_function_coverage=1 00:12:01.663 --rc genhtml_legend=1 00:12:01.663 --rc geninfo_all_blocks=1 00:12:01.663 --rc geninfo_unexecuted_blocks=1 00:12:01.663 00:12:01.663 ' 00:12:01.663 20:39:18 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:01.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.663 --rc genhtml_branch_coverage=1 00:12:01.663 --rc genhtml_function_coverage=1 00:12:01.663 --rc genhtml_legend=1 00:12:01.663 --rc geninfo_all_blocks=1 00:12:01.663 --rc geninfo_unexecuted_blocks=1 00:12:01.663 00:12:01.663 ' 00:12:01.663 20:39:18 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.663 20:39:18 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.663 20:39:18 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.663 20:39:18 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.663 20:39:18 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.663 20:39:18 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:01.663 20:39:18 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.663 20:39:18 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:01.663 20:39:18 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:01.664 20:39:18 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:01.664 20:39:18 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:01.664 20:39:18 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:01.664 20:39:18 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:01.664 20:39:18 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:01.664 20:39:18 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:01.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:02.180 Waiting for block devices as requested 00:12:02.180 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:02.180 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:02.439 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:02.439 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.702 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:07.702 20:39:24 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:07.702 20:39:24 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:07.702 20:39:24 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:07.960 20:39:24 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:07.960 20:39:24 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:07.960 No valid GPT data, bailing 00:12:07.960 20:39:24 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:07.960 20:39:24 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:07.960 20:39:24 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:07.960 20:39:24 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:07.960 20:39:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:07.960 20:39:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.960 20:39:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:07.960 ************************************ 00:12:07.960 START TEST xnvme_rpc 00:12:07.960 ************************************ 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:07.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68950 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68950 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68950 ']' 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.960 20:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:07.960 [2024-12-06 20:39:25.065146] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
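What the xnvme_rpc test above boils down to: bring up a target, create an xnvme bdev over RPC, then read the saved config back and compare each creation parameter. A sketch of that flow (the rpc_cmd argument order and the jq filter are copied verbatim from the trace; rpc_xnvme is the xnvme/common.sh helper as rendered by set -x):

# Target comes up in the background; waitforlisten blocks until the RPC socket
# at /var/tmp/spdk.sock (DEFAULT_RPC_ADDR) answers.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt=$!
waitforlisten "$spdk_tgt"

# Trailing '' is the empty conserve_cpu flag for the first (false) pass.
rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ''

rpc_xnvme() {   # project one creation parameter out of the framework config
    rpc_cmd framework_get_config bdev |
        jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}

[[ $(rpc_xnvme name) == xnvme_bdev ]]
[[ $(rpc_xnvme filename) == /dev/nvme0n1 ]]
[[ $(rpc_xnvme io_mechanism) == libaio ]]
[[ $(rpc_xnvme conserve_cpu) == false ]]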
00:12:07.960 [2024-12-06 20:39:25.065606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68950 ] 00:12:08.216 [2024-12-06 20:39:25.226261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.216 [2024-12-06 20:39:25.323253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.781 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.781 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:08.781 20:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:08.781 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.781 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.038 xnvme_bdev 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:09.038 20:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:09.039 20:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:09.039 20:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:09.039 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.039 20:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:09.039 20:39:26 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68950 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68950 ']' 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68950 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68950 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.039 killing process with pid 68950 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68950' 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68950 00:12:09.039 20:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68950 00:12:10.947 00:12:10.947 real 0m2.673s 00:12:10.947 user 0m2.734s 00:12:10.947 sys 0m0.364s 00:12:10.947 20:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.947 20:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.947 ************************************ 00:12:10.947 END TEST xnvme_rpc 00:12:10.947 ************************************ 00:12:10.947 20:39:27 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:10.947 20:39:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:10.947 20:39:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.947 20:39:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:10.947 ************************************ 00:12:10.947 START TEST xnvme_bdevperf 00:12:10.947 ************************************ 00:12:10.947 20:39:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:10.947 20:39:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:10.947 20:39:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:10.947 20:39:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:10.947 20:39:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:10.947 20:39:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:10.947 20:39:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:10.947 20:39:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:10.947 { 00:12:10.947 "subsystems": [ 00:12:10.947 { 00:12:10.947 "subsystem": "bdev", 00:12:10.947 "config": [ 00:12:10.947 { 00:12:10.947 "params": { 00:12:10.947 "io_mechanism": "libaio", 00:12:10.947 "conserve_cpu": false, 00:12:10.947 "filename": "/dev/nvme0n1", 00:12:10.947 "name": "xnvme_bdev" 00:12:10.947 }, 00:12:10.947 "method": "bdev_xnvme_create" 00:12:10.947 }, 00:12:10.947 { 00:12:10.947 "method": "bdev_wait_for_examine" 00:12:10.947 } 00:12:10.947 ] 00:12:10.947 } 00:12:10.947 ] 00:12:10.947 } 00:12:10.947 [2024-12-06 20:39:27.786838] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:12:10.947 [2024-12-06 20:39:27.786986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69018 ] 00:12:10.947 [2024-12-06 20:39:27.950471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.947 [2024-12-06 20:39:28.072816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.514 Running I/O for 5 seconds... 00:12:13.390 30697.00 IOPS, 119.91 MiB/s [2024-12-06T20:39:31.468Z] 29522.50 IOPS, 115.32 MiB/s [2024-12-06T20:39:32.411Z] 29524.67 IOPS, 115.33 MiB/s [2024-12-06T20:39:33.799Z] 28739.50 IOPS, 112.26 MiB/s 00:12:16.666 Latency(us) 00:12:16.666 [2024-12-06T20:39:33.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.666 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:16.666 xnvme_bdev : 5.00 28558.52 111.56 0.00 0.00 2236.36 215.83 6856.07 00:12:16.666 [2024-12-06T20:39:33.799Z] =================================================================================================================== 00:12:16.666 [2024-12-06T20:39:33.799Z] Total : 28558.52 111.56 0.00 0.00 2236.36 215.83 6856.07 00:12:16.928 20:39:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:16.928 20:39:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:16.928 20:39:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:16.928 20:39:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:16.928 20:39:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:16.928 { 00:12:16.928 "subsystems": [ 00:12:16.928 { 00:12:16.928 "subsystem": "bdev", 00:12:16.928 "config": [ 00:12:16.928 { 00:12:16.928 "params": { 00:12:16.928 "io_mechanism": "libaio", 00:12:16.928 "conserve_cpu": false, 00:12:16.928 "filename": "/dev/nvme0n1", 00:12:16.928 "name": "xnvme_bdev" 00:12:16.928 }, 00:12:16.928 "method": "bdev_xnvme_create" 00:12:16.928 }, 00:12:16.928 { 00:12:16.928 "method": "bdev_wait_for_examine" 00:12:16.928 } 00:12:16.928 ] 00:12:16.928 } 00:12:16.928 ] 00:12:16.928 } 00:12:16.928 [2024-12-06 20:39:34.044214] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
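The bdevperf runs here receive their bdev table as JSON on an inherited descriptor (--json /dev/fd/62, produced by gen_conf through process substitution). An equivalent standalone invocation, writing the same config (verbatim from the trace) to a temporary file instead:

conf=$(mktemp)
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

# Flags as traced: 64-deep random reads, 4 KiB I/O, 5 seconds, one target bdev.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json "$conf" -q 64 -w randread -t 5 -T xnvme_bdev -o 4096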
00:12:16.928 [2024-12-06 20:39:34.044333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69088 ] 00:12:17.189 [2024-12-06 20:39:34.201916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.189 [2024-12-06 20:39:34.285883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.450 Running I/O for 5 seconds... 00:12:19.402 37038.00 IOPS, 144.68 MiB/s [2024-12-06T20:39:37.921Z] 34999.50 IOPS, 136.72 MiB/s [2024-12-06T20:39:38.865Z] 33450.33 IOPS, 130.67 MiB/s [2024-12-06T20:39:39.811Z] 32148.75 IOPS, 125.58 MiB/s [2024-12-06T20:39:39.811Z] 31517.00 IOPS, 123.11 MiB/s 00:12:22.678 Latency(us) 00:12:22.678 [2024-12-06T20:39:39.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.678 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:22.678 xnvme_bdev : 5.01 31473.00 122.94 0.00 0.00 2027.38 43.91 19862.45 00:12:22.678 [2024-12-06T20:39:39.811Z] =================================================================================================================== 00:12:22.678 [2024-12-06T20:39:39.811Z] Total : 31473.00 122.94 0.00 0.00 2027.38 43.91 19862.45 00:12:23.253 00:12:23.253 real 0m12.596s 00:12:23.253 user 0m5.726s 00:12:23.253 sys 0m5.343s 00:12:23.253 20:39:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.253 ************************************ 00:12:23.253 END TEST xnvme_bdevperf 00:12:23.253 ************************************ 00:12:23.253 20:39:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:23.253 20:39:40 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:23.253 20:39:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.253 20:39:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.253 20:39:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.253 ************************************ 00:12:23.253 START TEST xnvme_fio_plugin 00:12:23.253 ************************************ 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:23.253 
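The fio passes that follow drive fio through the SPDK bdev ioengine; because this build is ASAN-instrumented, the harness first resolves the sanitizer runtime from the plugin's ldd output and preloads it ahead of the plugin so the runtime is initialized before the plugin loads. A sketch of that detection (the sanitizer list, the ldd/grep/awk pipeline, and the fio flags mirror the trace; "$conf" reuses the JSON config written above, an assumption in place of the traced /dev/fd/62):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # Third ldd column is the resolved path, e.g. /usr/lib64/libasan.so.8.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done

LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf="$conf" --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev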
20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:23.253 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:23.514 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:23.514 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:23.514 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:23.514 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:23.514 20:39:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:23.514 { 00:12:23.514 "subsystems": [ 00:12:23.514 { 00:12:23.514 "subsystem": "bdev", 00:12:23.514 "config": [ 00:12:23.514 { 00:12:23.514 "params": { 00:12:23.514 "io_mechanism": "libaio", 00:12:23.514 "conserve_cpu": false, 00:12:23.514 "filename": "/dev/nvme0n1", 00:12:23.514 "name": "xnvme_bdev" 00:12:23.514 }, 00:12:23.514 "method": "bdev_xnvme_create" 00:12:23.514 }, 00:12:23.514 { 00:12:23.514 "method": "bdev_wait_for_examine" 00:12:23.514 } 00:12:23.514 ] 00:12:23.514 } 00:12:23.514 ] 00:12:23.514 } 00:12:23.514 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:23.514 fio-3.35 00:12:23.514 Starting 1 thread 00:12:30.134 00:12:30.134 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69208: Fri Dec 6 20:39:46 2024 00:12:30.134 read: IOPS=36.4k, BW=142MiB/s (149MB/s)(711MiB/5002msec) 00:12:30.134 slat (usec): min=4, max=1949, avg=19.57, stdev=82.47 00:12:30.134 clat (usec): min=104, max=12589, avg=1231.04, stdev=520.38 00:12:30.134 lat (usec): min=172, max=12594, avg=1250.61, stdev=514.77 00:12:30.134 clat percentiles (usec): 00:12:30.134 | 1.00th=[ 265], 5.00th=[ 482], 10.00th=[ 627], 20.00th=[ 816], 00:12:30.134 | 30.00th=[ 947], 40.00th=[ 1057], 50.00th=[ 1172], 60.00th=[ 1303], 00:12:30.134 | 70.00th=[ 1434], 80.00th=[ 1598], 90.00th=[ 1876], 95.00th=[ 2147], 00:12:30.134 | 99.00th=[ 2835], 99.50th=[ 3097], 99.90th=[ 3654], 99.95th=[ 3982], 00:12:30.134 | 99.99th=[ 5014] 00:12:30.134 bw ( KiB/s): min=134896, 
max=154272, per=100.00%, avg=146031.11, stdev=6968.65, samples=9 00:12:30.134 iops : min=33724, max=38568, avg=36507.78, stdev=1742.16, samples=9 00:12:30.134 lat (usec) : 250=0.83%, 500=4.67%, 750=10.66%, 1000=18.32% 00:12:30.134 lat (msec) : 2=58.37%, 4=7.11%, 10=0.05%, 20=0.01% 00:12:30.134 cpu : usr=43.15%, sys=47.73%, ctx=26, majf=0, minf=764 00:12:30.134 IO depths : 1=0.3%, 2=1.2%, 4=3.2%, 8=8.7%, 16=23.3%, 32=61.1%, >=64=2.1% 00:12:30.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.134 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:30.134 issued rwts: total=182078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.134 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.134 00:12:30.134 Run status group 0 (all jobs): 00:12:30.134 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=711MiB (746MB), run=5002-5002msec 00:12:30.393 ----------------------------------------------------- 00:12:30.393 Suppressions used: 00:12:30.393 count bytes template 00:12:30.393 1 11 /usr/src/fio/parse.c 00:12:30.393 1 8 libtcmalloc_minimal.so 00:12:30.393 1 904 libcrypto.so 00:12:30.393 ----------------------------------------------------- 00:12:30.393 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:30.393 
20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:30.393 20:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:30.393 { 00:12:30.393 "subsystems": [ 00:12:30.393 { 00:12:30.393 "subsystem": "bdev", 00:12:30.393 "config": [ 00:12:30.393 { 00:12:30.393 "params": { 00:12:30.393 "io_mechanism": "libaio", 00:12:30.393 "conserve_cpu": false, 00:12:30.393 "filename": "/dev/nvme0n1", 00:12:30.393 "name": "xnvme_bdev" 00:12:30.393 }, 00:12:30.393 "method": "bdev_xnvme_create" 00:12:30.393 }, 00:12:30.393 { 00:12:30.393 "method": "bdev_wait_for_examine" 00:12:30.393 } 00:12:30.393 ] 00:12:30.393 } 00:12:30.393 ] 00:12:30.393 } 00:12:30.393 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:30.393 fio-3.35 00:12:30.393 Starting 1 thread 00:12:36.971 00:12:36.971 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69300: Fri Dec 6 20:39:53 2024 00:12:36.971 write: IOPS=37.3k, BW=146MiB/s (153MB/s)(729MiB/5006msec); 0 zone resets 00:12:36.971 slat (usec): min=4, max=1837, avg=18.16, stdev=69.09 00:12:36.971 clat (usec): min=9, max=13159, avg=1263.05, stdev=1223.17 00:12:36.971 lat (usec): min=73, max=13163, avg=1281.21, stdev=1221.15 00:12:36.971 clat percentiles (usec): 00:12:36.971 | 1.00th=[ 208], 5.00th=[ 338], 10.00th=[ 453], 20.00th=[ 635], 00:12:36.971 | 30.00th=[ 766], 40.00th=[ 881], 50.00th=[ 996], 60.00th=[ 1123], 00:12:36.971 | 70.00th=[ 1287], 80.00th=[ 1500], 90.00th=[ 1942], 95.00th=[ 2835], 00:12:36.971 | 99.00th=[ 7701], 99.50th=[ 8848], 99.90th=[10552], 99.95th=[10945], 00:12:36.971 | 99.99th=[11994] 00:12:36.971 bw ( KiB/s): min=124495, max=173232, per=99.93%, avg=149018.56, stdev=17127.04, samples=9 00:12:36.971 iops : min=31123, max=43308, avg=37254.56, stdev=4281.90, samples=9 00:12:36.971 lat (usec) : 10=0.01%, 20=0.01%, 50=0.02%, 100=0.07%, 250=1.84% 00:12:36.971 lat (usec) : 500=10.33%, 750=16.55%, 1000=21.95% 00:12:36.971 lat (msec) : 2=39.96%, 4=5.80%, 10=3.27%, 20=0.21% 00:12:36.971 cpu : usr=43.58%, sys=44.48%, ctx=8, majf=0, minf=765 00:12:36.971 IO depths : 1=0.2%, 2=0.6%, 4=2.1%, 8=6.8%, 16=20.8%, 32=66.7%, >=64=2.8% 00:12:36.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.971 complete : 0=0.0%, 4=97.6%, 8=0.1%, 16=0.2%, 32=0.5%, 64=1.6%, >=64=0.0% 00:12:36.971 issued rwts: total=0,186628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.971 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.971 00:12:36.971 Run status group 0 (all jobs): 00:12:36.971 WRITE: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=729MiB (764MB), run=5006-5006msec 00:12:37.233 ----------------------------------------------------- 00:12:37.233 Suppressions used: 00:12:37.233 count bytes template 00:12:37.233 1 11 /usr/src/fio/parse.c 00:12:37.233 1 8 libtcmalloc_minimal.so 00:12:37.233 1 904 libcrypto.so 00:12:37.233 ----------------------------------------------------- 00:12:37.233 00:12:37.233 00:12:37.233 real 
0m13.784s 00:12:37.233 user 0m7.086s 00:12:37.233 sys 0m5.255s 00:12:37.233 20:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.233 ************************************ 00:12:37.233 END TEST xnvme_fio_plugin 00:12:37.233 ************************************ 00:12:37.233 20:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:37.233 20:39:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:37.233 20:39:54 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:37.233 20:39:54 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:37.233 20:39:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:37.233 20:39:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:37.233 20:39:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.233 20:39:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.233 ************************************ 00:12:37.233 START TEST xnvme_rpc 00:12:37.233 ************************************ 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69386 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69386 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69386 ']' 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.233 20:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:37.233 [2024-12-06 20:39:54.321129] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
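[Editorial sketch] The xnvme_rpc test starting here boils down to a create/inspect/delete cycle against the spdk_tgt RPC socket. A minimal standalone sketch of that cycle, assuming a running spdk_tgt and that scripts/rpc.py is invoked from the SPDK repository root (the rpc_cmd wrapper in the trace forwards its arguments to it):

./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev

The trailing -c mirrors the cc["true"]=-c mapping set up at the top of the test, enabling conserve_cpu for this pass.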
00:12:37.233 [2024-12-06 20:39:54.321282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69386 ] 00:12:37.495 [2024-12-06 20:39:54.485430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.495 [2024-12-06 20:39:54.618338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.444 xnvme_bdev 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.444 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:38.445 20:39:55 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69386 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69386 ']' 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69386 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69386 00:12:38.445 killing process with pid 69386 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69386' 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69386 00:12:38.445 20:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69386 00:12:40.362 00:12:40.362 real 0m2.894s 00:12:40.362 user 0m2.882s 00:12:40.362 sys 0m0.472s 00:12:40.362 20:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.362 ************************************ 00:12:40.362 END TEST xnvme_rpc 00:12:40.362 20:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.362 ************************************ 00:12:40.362 20:39:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:40.362 20:39:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:40.362 20:39:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.362 20:39:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:40.362 ************************************ 00:12:40.362 START TEST xnvme_bdevperf 00:12:40.362 ************************************ 00:12:40.362 20:39:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:40.362 20:39:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:40.362 20:39:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:40.362 20:39:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:40.362 20:39:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:40.362 20:39:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:40.362 20:39:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:40.362 20:39:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:40.362 { 00:12:40.362 "subsystems": [ 00:12:40.362 { 00:12:40.362 "subsystem": "bdev", 00:12:40.362 "config": [ 00:12:40.362 { 00:12:40.362 "params": { 00:12:40.362 "io_mechanism": "libaio", 00:12:40.362 "conserve_cpu": true, 00:12:40.362 "filename": "/dev/nvme0n1", 00:12:40.362 "name": "xnvme_bdev" 00:12:40.362 }, 00:12:40.362 "method": "bdev_xnvme_create" 00:12:40.362 }, 00:12:40.362 { 00:12:40.362 "method": "bdev_wait_for_examine" 00:12:40.362 } 00:12:40.362 ] 00:12:40.362 } 00:12:40.362 ] 00:12:40.362 } 00:12:40.362 [2024-12-06 20:39:57.266023] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:12:40.362 [2024-12-06 20:39:57.266333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69460 ] 00:12:40.362 [2024-12-06 20:39:57.431083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.623 [2024-12-06 20:39:57.553434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.883 Running I/O for 5 seconds... 00:12:42.752 38050.00 IOPS, 148.63 MiB/s [2024-12-06T20:40:01.259Z] 38011.00 IOPS, 148.48 MiB/s [2024-12-06T20:40:01.908Z] 38477.33 IOPS, 150.30 MiB/s [2024-12-06T20:40:03.292Z] 38229.25 IOPS, 149.33 MiB/s 00:12:46.159 Latency(us) 00:12:46.159 [2024-12-06T20:40:03.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.159 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:46.159 xnvme_bdev : 5.00 38252.84 149.43 0.00 0.00 1668.53 230.01 9628.75 00:12:46.159 [2024-12-06T20:40:03.292Z] =================================================================================================================== 00:12:46.159 [2024-12-06T20:40:03.292Z] Total : 38252.84 149.43 0.00 0.00 1668.53 230.01 9628.75 00:12:46.725 20:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:46.725 20:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:46.725 20:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:46.725 20:40:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:46.725 20:40:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:46.725 { 00:12:46.725 "subsystems": [ 00:12:46.725 { 00:12:46.725 "subsystem": "bdev", 00:12:46.725 "config": [ 00:12:46.725 { 00:12:46.725 "params": { 00:12:46.725 "io_mechanism": "libaio", 00:12:46.725 "conserve_cpu": true, 00:12:46.725 "filename": "/dev/nvme0n1", 00:12:46.725 "name": "xnvme_bdev" 00:12:46.725 }, 00:12:46.725 "method": "bdev_xnvme_create" 00:12:46.725 }, 00:12:46.725 { 00:12:46.725 "method": "bdev_wait_for_examine" 00:12:46.725 } 00:12:46.725 ] 00:12:46.725 } 00:12:46.725 ] 00:12:46.725 } 00:12:46.725 [2024-12-06 20:40:03.754232] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
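[Editorial sketch] Both bdevperf passes in this block receive their bdev configuration as JSON on /dev/fd/62. A standalone reproduction with an on-disk config instead, assuming the same VM paths shown in the trace (the /tmp file name is illustrative):

cat > /tmp/xnvme_libaio.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
 {"method":"bdev_xnvme_create",
  "params":{"io_mechanism":"libaio","conserve_cpu":true,
            "filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
 {"method":"bdev_wait_for_examine"}]}]}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_libaio.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096

As in the run above, -q, -w, -t and -o give queue depth, workload, runtime in seconds and I/O size in bytes, and -T selects the bdev under test.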
00:12:46.725 [2024-12-06 20:40:03.754611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69535 ] 00:12:46.984 [2024-12-06 20:40:03.914929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.984 [2024-12-06 20:40:04.047338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.243 Running I/O for 5 seconds... 00:12:49.562 32716.00 IOPS, 127.80 MiB/s [2024-12-06T20:40:07.631Z] 31952.00 IOPS, 124.81 MiB/s [2024-12-06T20:40:08.569Z] 32506.00 IOPS, 126.98 MiB/s [2024-12-06T20:40:09.516Z] 32875.00 IOPS, 128.42 MiB/s [2024-12-06T20:40:09.516Z] 33008.80 IOPS, 128.94 MiB/s 00:12:52.383 Latency(us) 00:12:52.383 [2024-12-06T20:40:09.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.384 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:52.384 xnvme_bdev : 5.00 32994.60 128.89 0.00 0.00 1935.36 45.10 18955.03 00:12:52.384 [2024-12-06T20:40:09.517Z] =================================================================================================================== 00:12:52.384 [2024-12-06T20:40:09.517Z] Total : 32994.60 128.89 0.00 0.00 1935.36 45.10 18955.03 00:12:53.029 00:12:53.029 real 0m12.916s 00:12:53.029 user 0m5.665s 00:12:53.029 sys 0m5.486s 00:12:53.029 20:40:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.029 20:40:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:53.029 ************************************ 00:12:53.029 END TEST xnvme_bdevperf 00:12:53.029 ************************************ 00:12:53.029 20:40:10 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:53.029 20:40:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:53.029 20:40:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.029 20:40:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.286 ************************************ 00:12:53.286 START TEST xnvme_fio_plugin 00:12:53.286 ************************************ 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:53.286 
20:40:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:53.286 20:40:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:53.286 { 00:12:53.286 "subsystems": [ 00:12:53.286 { 00:12:53.286 "subsystem": "bdev", 00:12:53.286 "config": [ 00:12:53.286 { 00:12:53.286 "params": { 00:12:53.286 "io_mechanism": "libaio", 00:12:53.286 "conserve_cpu": true, 00:12:53.286 "filename": "/dev/nvme0n1", 00:12:53.286 "name": "xnvme_bdev" 00:12:53.286 }, 00:12:53.286 "method": "bdev_xnvme_create" 00:12:53.286 }, 00:12:53.286 { 00:12:53.286 "method": "bdev_wait_for_examine" 00:12:53.286 } 00:12:53.286 ] 00:12:53.286 } 00:12:53.286 ] 00:12:53.286 } 00:12:53.286 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:53.286 fio-3.35 00:12:53.286 Starting 1 thread 00:12:59.842 00:12:59.842 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69649: Fri Dec 6 20:40:16 2024 00:12:59.842 read: IOPS=36.9k, BW=144MiB/s (151MB/s)(722MiB/5001msec) 00:12:59.842 slat (usec): min=4, max=1846, avg=19.86, stdev=82.03 00:12:59.842 clat (usec): min=106, max=4592, avg=1193.41, stdev=507.80 00:12:59.842 lat (usec): min=170, max=4643, avg=1213.27, stdev=502.18 00:12:59.842 clat percentiles (usec): 00:12:59.842 | 1.00th=[ 245], 5.00th=[ 441], 10.00th=[ 586], 20.00th=[ 775], 00:12:59.842 | 30.00th=[ 922], 40.00th=[ 1045], 50.00th=[ 1156], 60.00th=[ 1270], 00:12:59.842 | 70.00th=[ 1401], 80.00th=[ 1565], 90.00th=[ 1811], 95.00th=[ 2057], 00:12:59.842 | 99.00th=[ 2769], 99.50th=[ 3032], 99.90th=[ 3556], 99.95th=[ 3720], 00:12:59.842 | 99.99th=[ 4228] 00:12:59.842 bw ( KiB/s): min=138448, 
max=166592, per=100.00%, avg=148551.11, stdev=8833.48, samples=9 00:12:59.842 iops : min=34612, max=41648, avg=37137.78, stdev=2208.37, samples=9 00:12:59.842 lat (usec) : 250=1.10%, 500=5.52%, 750=12.13%, 1000=17.72% 00:12:59.842 lat (msec) : 2=57.55%, 4=5.97%, 10=0.02% 00:12:59.842 cpu : usr=40.58%, sys=51.32%, ctx=11, majf=0, minf=764 00:12:59.842 IO depths : 1=0.5%, 2=1.2%, 4=3.2%, 8=8.8%, 16=23.6%, 32=60.8%, >=64=2.0% 00:12:59.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.842 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:59.842 issued rwts: total=184780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.842 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:59.842 00:12:59.842 Run status group 0 (all jobs): 00:12:59.842 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=722MiB (757MB), run=5001-5001msec 00:12:59.842 ----------------------------------------------------- 00:12:59.842 Suppressions used: 00:12:59.842 count bytes template 00:12:59.842 1 11 /usr/src/fio/parse.c 00:12:59.842 1 8 libtcmalloc_minimal.so 00:12:59.842 1 904 libcrypto.so 00:12:59.842 ----------------------------------------------------- 00:12:59.842 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:00.101 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:00.102 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:00.102 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:00.102 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:00.102 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:00.102 20:40:16 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:00.102 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:00.102 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:00.102 20:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:00.102 { 00:13:00.102 "subsystems": [ 00:13:00.102 { 00:13:00.102 "subsystem": "bdev", 00:13:00.102 "config": [ 00:13:00.102 { 00:13:00.102 "params": { 00:13:00.102 "io_mechanism": "libaio", 00:13:00.102 "conserve_cpu": true, 00:13:00.102 "filename": "/dev/nvme0n1", 00:13:00.102 "name": "xnvme_bdev" 00:13:00.102 }, 00:13:00.102 "method": "bdev_xnvme_create" 00:13:00.102 }, 00:13:00.102 { 00:13:00.102 "method": "bdev_wait_for_examine" 00:13:00.102 } 00:13:00.102 ] 00:13:00.102 } 00:13:00.102 ] 00:13:00.102 } 00:13:00.102 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:00.102 fio-3.35 00:13:00.102 Starting 1 thread 00:13:06.665 00:13:06.665 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69745: Fri Dec 6 20:40:22 2024 00:13:06.665 write: IOPS=39.2k, BW=153MiB/s (160MB/s)(765MiB/5002msec); 0 zone resets 00:13:06.665 slat (usec): min=4, max=1876, avg=20.08, stdev=73.31 00:13:06.665 clat (usec): min=16, max=9730, avg=1097.13, stdev=625.09 00:13:06.665 lat (usec): min=62, max=9762, avg=1117.21, stdev=622.16 00:13:06.665 clat percentiles (usec): 00:13:06.665 | 1.00th=[ 215], 5.00th=[ 334], 10.00th=[ 465], 20.00th=[ 635], 00:13:06.665 | 30.00th=[ 766], 40.00th=[ 898], 50.00th=[ 1012], 60.00th=[ 1139], 00:13:06.665 | 70.00th=[ 1287], 80.00th=[ 1467], 90.00th=[ 1762], 95.00th=[ 2073], 00:13:06.666 | 99.00th=[ 2999], 99.50th=[ 3556], 99.90th=[ 7308], 99.95th=[ 8029], 00:13:06.666 | 99.99th=[ 8979] 00:13:06.666 bw ( KiB/s): min=144104, max=162080, per=99.55%, avg=155952.89, stdev=5542.69, samples=9 00:13:06.666 iops : min=36026, max=40520, avg=38988.22, stdev=1385.67, samples=9 00:13:06.666 lat (usec) : 20=0.01%, 50=0.01%, 100=0.03%, 250=1.83%, 500=9.82% 00:13:06.666 lat (usec) : 750=16.89%, 1000=20.53% 00:13:06.666 lat (msec) : 2=45.14%, 4=5.34%, 10=0.40% 00:13:06.666 cpu : usr=36.51%, sys=54.03%, ctx=11, majf=0, minf=765 00:13:06.666 IO depths : 1=0.3%, 2=0.9%, 4=2.8%, 8=8.6%, 16=23.9%, 32=61.4%, >=64=2.1% 00:13:06.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.666 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:06.666 issued rwts: total=0,195898,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.666 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.666 00:13:06.666 Run status group 0 (all jobs): 00:13:06.666 WRITE: bw=153MiB/s (160MB/s), 153MiB/s-153MiB/s (160MB/s-160MB/s), io=765MiB (802MB), run=5002-5002msec 00:13:06.666 ----------------------------------------------------- 00:13:06.666 Suppressions used: 00:13:06.666 count bytes template 00:13:06.666 1 11 /usr/src/fio/parse.c 00:13:06.666 1 8 libtcmalloc_minimal.so 00:13:06.666 1 904 libcrypto.so 00:13:06.666 ----------------------------------------------------- 00:13:06.666 00:13:06.666 00:13:06.666 real 0m13.564s 00:13:06.666 user 0m6.537s 
00:13:06.666 sys 0m5.767s 00:13:06.666 20:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.666 ************************************ 00:13:06.666 END TEST xnvme_fio_plugin 00:13:06.666 ************************************ 00:13:06.666 20:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 20:40:23 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:06.666 20:40:23 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:06.666 20:40:23 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:06.666 20:40:23 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:06.666 20:40:23 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:06.666 20:40:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:06.666 20:40:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:06.666 20:40:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:06.666 20:40:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:06.666 20:40:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:06.666 20:40:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.666 20:40:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 ************************************ 00:13:06.666 START TEST xnvme_rpc 00:13:06.666 ************************************ 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:06.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69827 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69827 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69827 ']' 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 20:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:06.924 [2024-12-06 20:40:23.869385] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
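[Editorial sketch] Each xnvme_fio_plugin pass in this log launches fio the same way: the SPDK fio plugin (plus ASan, when the ldd probe above finds it) is stacked into LD_PRELOAD, and the bdev JSON again arrives on /dev/fd/62. Condensed from the trace, with the JSON written to a file for readability (file path illustrative, reusing the config sketched earlier):

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_libaio.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

Note that --filename names the bdev created by the JSON config, not a device path.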
00:13:06.924 [2024-12-06 20:40:23.870076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69827 ] 00:13:06.924 [2024-12-06 20:40:24.029416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.182 [2024-12-06 20:40:24.128870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.748 xnvme_bdev 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:07.748 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.007 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:08.007 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:08.007 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:08.007 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:08.007 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.007 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69827 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69827 ']' 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69827 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69827 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69827' 00:13:08.008 killing process with pid 69827 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69827 00:13:08.008 20:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69827 00:13:09.378 00:13:09.378 real 0m2.703s 00:13:09.378 user 0m2.847s 00:13:09.378 sys 0m0.381s 00:13:09.378 20:40:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.378 20:40:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.378 ************************************ 00:13:09.378 END TEST xnvme_rpc 00:13:09.378 ************************************ 00:13:09.634 20:40:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:09.634 20:40:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:09.634 20:40:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.634 20:40:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.634 ************************************ 00:13:09.634 START TEST xnvme_bdevperf 00:13:09.634 ************************************ 00:13:09.634 20:40:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:09.634 20:40:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:09.634 20:40:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:09.634 20:40:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:09.634 20:40:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:09.634 20:40:26 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:09.634 20:40:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:09.634 20:40:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:09.634 { 00:13:09.634 "subsystems": [ 00:13:09.634 { 00:13:09.634 "subsystem": "bdev", 00:13:09.634 "config": [ 00:13:09.634 { 00:13:09.634 "params": { 00:13:09.634 "io_mechanism": "io_uring", 00:13:09.634 "conserve_cpu": false, 00:13:09.634 "filename": "/dev/nvme0n1", 00:13:09.634 "name": "xnvme_bdev" 00:13:09.634 }, 00:13:09.634 "method": "bdev_xnvme_create" 00:13:09.634 }, 00:13:09.634 { 00:13:09.634 "method": "bdev_wait_for_examine" 00:13:09.634 } 00:13:09.634 ] 00:13:09.634 } 00:13:09.634 ] 00:13:09.634 } 00:13:09.634 [2024-12-06 20:40:26.619665] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:13:09.634 [2024-12-06 20:40:26.619922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69901 ] 00:13:09.890 [2024-12-06 20:40:26.782100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.891 [2024-12-06 20:40:26.882540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.150 Running I/O for 5 seconds... 00:13:12.009 42300.00 IOPS, 165.23 MiB/s [2024-12-06T20:40:30.515Z] 41437.00 IOPS, 161.86 MiB/s [2024-12-06T20:40:31.448Z] 40991.33 IOPS, 160.12 MiB/s [2024-12-06T20:40:32.380Z] 41530.75 IOPS, 162.23 MiB/s 00:13:15.247 Latency(us) 00:13:15.247 [2024-12-06T20:40:32.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.247 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:15.247 xnvme_bdev : 5.00 41166.75 160.81 0.00 0.00 1550.71 356.04 7763.50 00:13:15.247 [2024-12-06T20:40:32.380Z] =================================================================================================================== 00:13:15.247 [2024-12-06T20:40:32.380Z] Total : 41166.75 160.81 0.00 0.00 1550.71 356.04 7763.50 00:13:15.815 20:40:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:15.815 20:40:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:15.815 20:40:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:15.815 20:40:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:15.815 20:40:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:15.815 { 00:13:15.815 "subsystems": [ 00:13:15.815 { 00:13:15.815 "subsystem": "bdev", 00:13:15.815 "config": [ 00:13:15.815 { 00:13:15.815 "params": { 00:13:15.815 "io_mechanism": "io_uring", 00:13:15.815 "conserve_cpu": false, 00:13:15.815 "filename": "/dev/nvme0n1", 00:13:15.815 "name": "xnvme_bdev" 00:13:15.815 }, 00:13:15.815 "method": "bdev_xnvme_create" 00:13:15.815 }, 00:13:15.815 { 00:13:15.815 "method": "bdev_wait_for_examine" 00:13:15.815 } 00:13:15.815 ] 00:13:15.815 } 00:13:15.815 ] 00:13:15.815 } 00:13:15.815 [2024-12-06 20:40:32.920351] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
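[Editorial note] The IOPS and MiB/s columns in these bdevperf summaries are consistent by construction: throughput equals IOPS times the 4096-byte I/O size. Checking the randread total reported just above:

echo 'scale=4; 41166.75 * 4096 / 1048576' | bc   # 160.8076, i.e. the 160.81 MiB/s in the table

The same identity holds for the fio runs, whose bw lines are reported in KiB/s (IOPS x 4).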
00:13:15.815 [2024-12-06 20:40:32.920458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69973 ] 00:13:16.073 [2024-12-06 20:40:33.081723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.074 [2024-12-06 20:40:33.177358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.354 Running I/O for 5 seconds... 00:13:18.304 38898.00 IOPS, 151.95 MiB/s [2024-12-06T20:40:36.810Z] 36823.00 IOPS, 143.84 MiB/s [2024-12-06T20:40:37.743Z] 36373.33 IOPS, 142.08 MiB/s [2024-12-06T20:40:38.677Z] 36347.50 IOPS, 141.98 MiB/s [2024-12-06T20:40:38.677Z] 36292.40 IOPS, 141.77 MiB/s 00:13:21.544 Latency(us) 00:13:21.544 [2024-12-06T20:40:38.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.544 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:21.544 xnvme_bdev : 5.00 36283.13 141.73 0.00 0.00 1759.88 95.31 14619.57 00:13:21.544 [2024-12-06T20:40:38.677Z] =================================================================================================================== 00:13:21.544 [2024-12-06T20:40:38.677Z] Total : 36283.13 141.73 0.00 0.00 1759.88 95.31 14619.57 00:13:22.111 00:13:22.111 real 0m12.593s 00:13:22.111 user 0m5.970s 00:13:22.111 sys 0m6.358s 00:13:22.111 20:40:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.111 20:40:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:22.111 ************************************ 00:13:22.111 END TEST xnvme_bdevperf 00:13:22.111 ************************************ 00:13:22.111 20:40:39 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:22.111 20:40:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:22.111 20:40:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.111 20:40:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.111 ************************************ 00:13:22.111 START TEST xnvme_fio_plugin 00:13:22.111 ************************************ 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:22.111 20:40:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.111 { 00:13:22.111 "subsystems": [ 00:13:22.111 { 00:13:22.111 "subsystem": "bdev", 00:13:22.111 "config": [ 00:13:22.111 { 00:13:22.111 "params": { 00:13:22.111 "io_mechanism": "io_uring", 00:13:22.111 "conserve_cpu": false, 00:13:22.111 "filename": "/dev/nvme0n1", 00:13:22.111 "name": "xnvme_bdev" 00:13:22.111 }, 00:13:22.111 "method": "bdev_xnvme_create" 00:13:22.111 }, 00:13:22.111 { 00:13:22.111 "method": "bdev_wait_for_examine" 00:13:22.111 } 00:13:22.111 ] 00:13:22.111 } 00:13:22.111 ] 00:13:22.111 } 00:13:22.370 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:22.370 fio-3.35 00:13:22.370 Starting 1 thread 00:13:28.945 00:13:28.945 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70092: Fri Dec 6 20:40:45 2024 00:13:28.945 read: IOPS=38.1k, BW=149MiB/s (156MB/s)(744MiB/5001msec) 00:13:28.945 slat (nsec): min=2850, max=69373, avg=3615.00, stdev=2003.03 00:13:28.945 clat (usec): min=486, max=3511, avg=1535.77, stdev=286.39 00:13:28.945 lat (usec): min=489, max=3580, avg=1539.39, stdev=286.74 00:13:28.945 clat percentiles (usec): 00:13:28.945 | 1.00th=[ 963], 5.00th=[ 1090], 10.00th=[ 1172], 20.00th=[ 1287], 00:13:28.945 | 30.00th=[ 1369], 40.00th=[ 1450], 50.00th=[ 1532], 60.00th=[ 1598], 00:13:28.945 | 70.00th=[ 1680], 80.00th=[ 1762], 90.00th=[ 1893], 95.00th=[ 2024], 00:13:28.945 | 99.00th=[ 2278], 99.50th=[ 2376], 99.90th=[ 2737], 99.95th=[ 2900], 00:13:28.945 | 99.99th=[ 3294] 00:13:28.945 bw ( 
KiB/s): min=145408, max=162816, per=100.00%, avg=152462.22, stdev=4828.69, samples=9 00:13:28.945 iops : min=36352, max=40704, avg=38115.56, stdev=1207.17, samples=9 00:13:28.945 lat (usec) : 500=0.01%, 750=0.01%, 1000=1.64% 00:13:28.945 lat (msec) : 2=92.67%, 4=5.67% 00:13:28.945 cpu : usr=32.98%, sys=65.86%, ctx=12, majf=0, minf=762 00:13:28.945 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:28.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.945 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:28.945 issued rwts: total=190369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.945 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:28.945 00:13:28.945 Run status group 0 (all jobs): 00:13:28.945 READ: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=744MiB (780MB), run=5001-5001msec 00:13:28.945 ----------------------------------------------------- 00:13:28.945 Suppressions used: 00:13:28.945 count bytes template 00:13:28.945 1 11 /usr/src/fio/parse.c 00:13:28.945 1 8 libtcmalloc_minimal.so 00:13:28.945 1 904 libcrypto.so 00:13:28.945 ----------------------------------------------------- 00:13:28.945 00:13:28.945 20:40:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:28.945 20:40:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.945 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:28.946 20:40:45 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:28.946 20:40:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.946 { 00:13:28.946 "subsystems": [ 00:13:28.946 { 00:13:28.946 "subsystem": "bdev", 00:13:28.946 "config": [ 00:13:28.946 { 00:13:28.946 "params": { 00:13:28.946 "io_mechanism": "io_uring", 00:13:28.946 "conserve_cpu": false, 00:13:28.946 "filename": "/dev/nvme0n1", 00:13:28.946 "name": "xnvme_bdev" 00:13:28.946 }, 00:13:28.946 "method": "bdev_xnvme_create" 00:13:28.946 }, 00:13:28.946 { 00:13:28.946 "method": "bdev_wait_for_examine" 00:13:28.946 } 00:13:28.946 ] 00:13:28.946 } 00:13:28.946 ] 00:13:28.946 } 00:13:29.207 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:29.207 fio-3.35 00:13:29.207 Starting 1 thread 00:13:35.763 00:13:35.763 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70178: Fri Dec 6 20:40:51 2024 00:13:35.763 write: IOPS=38.2k, BW=149MiB/s (157MB/s)(747MiB/5001msec); 0 zone resets 00:13:35.763 slat (nsec): min=2880, max=77917, avg=3826.55, stdev=2106.63 00:13:35.763 clat (usec): min=477, max=7345, avg=1521.45, stdev=278.47 00:13:35.763 lat (usec): min=480, max=7348, avg=1525.27, stdev=278.78 00:13:35.763 clat percentiles (usec): 00:13:35.763 | 1.00th=[ 988], 5.00th=[ 1106], 10.00th=[ 1172], 20.00th=[ 1287], 00:13:35.763 | 30.00th=[ 1369], 40.00th=[ 1450], 50.00th=[ 1516], 60.00th=[ 1582], 00:13:35.763 | 70.00th=[ 1647], 80.00th=[ 1729], 90.00th=[ 1860], 95.00th=[ 1991], 00:13:35.763 | 99.00th=[ 2278], 99.50th=[ 2409], 99.90th=[ 2737], 99.95th=[ 3032], 00:13:35.763 | 99.99th=[ 4555] 00:13:35.763 bw ( KiB/s): min=146040, max=160760, per=100.00%, avg=153457.78, stdev=5558.19, samples=9 00:13:35.763 iops : min=36510, max=40190, avg=38364.44, stdev=1389.55, samples=9 00:13:35.763 lat (usec) : 500=0.01%, 750=0.01%, 1000=1.24% 00:13:35.763 lat (msec) : 2=94.15%, 4=4.58%, 10=0.02% 00:13:35.763 cpu : usr=34.52%, sys=64.32%, ctx=14, majf=0, minf=763 00:13:35.763 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:35.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.763 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:35.763 issued rwts: total=0,191117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.763 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:35.763 00:13:35.763 Run status group 0 (all jobs): 00:13:35.763 WRITE: bw=149MiB/s (157MB/s), 149MiB/s-149MiB/s (157MB/s-157MB/s), io=747MiB (783MB), run=5001-5001msec 00:13:35.763 ----------------------------------------------------- 00:13:35.763 Suppressions used: 00:13:35.763 count bytes template 00:13:35.763 1 11 /usr/src/fio/parse.c 00:13:35.763 1 8 libtcmalloc_minimal.so 00:13:35.763 1 904 libcrypto.so 00:13:35.763 ----------------------------------------------------- 00:13:35.763 00:13:35.763 ************************************ 00:13:35.763 END TEST xnvme_fio_plugin 00:13:35.763 
************************************ 00:13:35.763 00:13:35.763 real 0m13.545s 00:13:35.763 user 0m6.091s 00:13:35.763 sys 0m7.010s 00:13:35.763 20:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.763 20:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:35.763 20:40:52 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:35.763 20:40:52 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:35.763 20:40:52 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:35.763 20:40:52 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:35.763 20:40:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:35.763 20:40:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.763 20:40:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.763 ************************************ 00:13:35.763 START TEST xnvme_rpc 00:13:35.763 ************************************ 00:13:35.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70264 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70264 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70264 ']' 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:35.763 20:40:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.763 [2024-12-06 20:40:52.889430] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
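[Editorial sketch] The conserve_cpu=true leg of the io_uring sweep begins here: xnvme.sh iterates io_mechanism against conserve_cpu and reruns xnvme_rpc, xnvme_bdevperf and xnvme_fio_plugin for every combination. The shape of that driver loop, reconstructed from the xtrace markers (xnvme.sh@75-88) rather than copied from the script:

for io in "${xnvme_io[@]}"; do                 # libaio, io_uring, ...
  method_bdev_xnvme_create_0["io_mechanism"]=$io
  for cc in "${xnvme_conserve_cpu[@]}"; do     # false, true
    method_bdev_xnvme_create_0["conserve_cpu"]=$cc
    run_test xnvme_rpc xnvme_rpc
    run_test xnvme_bdevperf xnvme_bdevperf
    run_test xnvme_fio_plugin xnvme_fio_plugin
  done
done

Only libaio and io_uring are visible in this excerpt (the filename and bdev name are also set inside the outer loop, xnvme.sh@77-80), so any further array entries are not confirmed here.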
00:13:35.763 [2024-12-06 20:40:52.889549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70264 ] 00:13:36.021 [2024-12-06 20:40:53.045238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.021 [2024-12-06 20:40:53.140829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.954 xnvme_bdev 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70264 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70264 ']' 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70264 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70264 00:13:36.954 killing process with pid 70264 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70264' 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70264 00:13:36.954 20:40:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70264 00:13:38.327 00:13:38.327 real 0m2.594s 00:13:38.327 user 0m2.706s 00:13:38.327 sys 0m0.348s 00:13:38.327 ************************************ 00:13:38.327 END TEST xnvme_rpc 00:13:38.327 ************************************ 00:13:38.327 20:40:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.327 20:40:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.327 20:40:55 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:38.327 20:40:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:38.327 20:40:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.327 20:40:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.327 ************************************ 00:13:38.327 START TEST xnvme_bdevperf 00:13:38.327 ************************************ 00:13:38.327 20:40:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:38.327 20:40:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:38.327 20:40:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:38.327 20:40:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:38.327 20:40:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:38.327 20:40:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
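For the bdevperf runs that follow, the harness feeds a generated JSON config to bdevperf through /dev/fd/62, i.e. bash process substitution of the gen_conf helper's output (the JSON it emits is printed below). A manual equivalent, with the config written to a temporary file first (path assumed) and the flags copied from the trace, would look roughly like:

    gen_conf > /tmp/xnvme_bdev.json   # the {"subsystems": ...} document shown below; /tmp path assumed
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_bdev.json \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096   # qd 64, 4 KiB random reads, 5 s, on xnvme_bdev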
00:13:38.327 20:40:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:38.327 20:40:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:38.583 { 00:13:38.583 "subsystems": [ 00:13:38.583 { 00:13:38.583 "subsystem": "bdev", 00:13:38.583 "config": [ 00:13:38.583 { 00:13:38.583 "params": { 00:13:38.583 "io_mechanism": "io_uring", 00:13:38.583 "conserve_cpu": true, 00:13:38.583 "filename": "/dev/nvme0n1", 00:13:38.583 "name": "xnvme_bdev" 00:13:38.583 }, 00:13:38.583 "method": "bdev_xnvme_create" 00:13:38.583 }, 00:13:38.583 { 00:13:38.583 "method": "bdev_wait_for_examine" 00:13:38.583 } 00:13:38.583 ] 00:13:38.583 } 00:13:38.583 ] 00:13:38.583 } 00:13:38.583 [2024-12-06 20:40:55.511977] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:13:38.583 [2024-12-06 20:40:55.512191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70333 ] 00:13:38.583 [2024-12-06 20:40:55.671547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.840 [2024-12-06 20:40:55.764309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.098 Running I/O for 5 seconds... 00:13:40.962 62348.00 IOPS, 243.55 MiB/s [2024-12-06T20:40:59.062Z] 62647.00 IOPS, 244.71 MiB/s [2024-12-06T20:41:00.442Z] 62927.00 IOPS, 245.81 MiB/s [2024-12-06T20:41:01.375Z] 62698.00 IOPS, 244.91 MiB/s 00:13:44.242 Latency(us) 00:13:44.242 [2024-12-06T20:41:01.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.242 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:44.242 xnvme_bdev : 5.00 62446.59 243.93 0.00 0.00 1020.81 206.38 13611.32 00:13:44.242 [2024-12-06T20:41:01.375Z] =================================================================================================================== 00:13:44.242 [2024-12-06T20:41:01.375Z] Total : 62446.59 243.93 0.00 0.00 1020.81 206.38 13611.32 00:13:44.807 20:41:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:44.807 20:41:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:44.807 20:41:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:44.807 20:41:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:44.807 20:41:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:44.807 { 00:13:44.807 "subsystems": [ 00:13:44.807 { 00:13:44.807 "subsystem": "bdev", 00:13:44.807 "config": [ 00:13:44.807 { 00:13:44.807 "params": { 00:13:44.807 "io_mechanism": "io_uring", 00:13:44.807 "conserve_cpu": true, 00:13:44.807 "filename": "/dev/nvme0n1", 00:13:44.807 "name": "xnvme_bdev" 00:13:44.807 }, 00:13:44.807 "method": "bdev_xnvme_create" 00:13:44.807 }, 00:13:44.807 { 00:13:44.807 "method": "bdev_wait_for_examine" 00:13:44.807 } 00:13:44.807 ] 00:13:44.807 } 00:13:44.807 ] 00:13:44.807 } 00:13:44.807 [2024-12-06 20:41:01.787301] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
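While the randwrite bdevperf instance initializes, the randread result above can be given a back-of-envelope sanity check with Little's law: at a nominally full queue depth of 64, average latency should sit near qd/IOPS.

    # Little's law check against the randread total above (qd=64, 62446.59 IOPS):
    awk 'BEGIN { printf "%.2f us\n", 64 / 62446.59 * 1e6 }'   # ~1024.87 us, close to the 1020.81 us reported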
00:13:44.807 [2024-12-06 20:41:01.787409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70408 ] 00:13:45.065 [2024-12-06 20:41:01.948653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.065 [2024-12-06 20:41:02.043094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.323 Running I/O for 5 seconds... 00:13:47.190 63478.00 IOPS, 247.96 MiB/s [2024-12-06T20:41:05.695Z] 64091.00 IOPS, 250.36 MiB/s [2024-12-06T20:41:06.624Z] 61513.00 IOPS, 240.29 MiB/s [2024-12-06T20:41:07.591Z] 60595.25 IOPS, 236.70 MiB/s [2024-12-06T20:41:07.591Z] 60186.60 IOPS, 235.10 MiB/s 00:13:50.458 Latency(us) 00:13:50.458 [2024-12-06T20:41:07.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.458 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:50.458 xnvme_bdev : 5.01 60100.65 234.77 0.00 0.00 1060.42 93.34 16636.06 00:13:50.458 [2024-12-06T20:41:07.591Z] =================================================================================================================== 00:13:50.458 [2024-12-06T20:41:07.591Z] Total : 60100.65 234.77 0.00 0.00 1060.42 93.34 16636.06 00:13:51.022 00:13:51.022 real 0m12.557s 00:13:51.022 user 0m6.046s 00:13:51.022 sys 0m5.936s 00:13:51.022 ************************************ 00:13:51.022 END TEST xnvme_bdevperf 00:13:51.022 ************************************ 00:13:51.022 20:41:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.022 20:41:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:51.022 20:41:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:51.022 20:41:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:51.022 20:41:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.022 20:41:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.022 ************************************ 00:13:51.022 START TEST xnvme_fio_plugin 00:13:51.022 ************************************ 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.022 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:51.023 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:51.023 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:51.023 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:51.023 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:51.023 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:51.023 20:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.023 { 00:13:51.023 "subsystems": [ 00:13:51.023 { 00:13:51.023 "subsystem": "bdev", 00:13:51.023 "config": [ 00:13:51.023 { 00:13:51.023 "params": { 00:13:51.023 "io_mechanism": "io_uring", 00:13:51.023 "conserve_cpu": true, 00:13:51.023 "filename": "/dev/nvme0n1", 00:13:51.023 "name": "xnvme_bdev" 00:13:51.023 }, 00:13:51.023 "method": "bdev_xnvme_create" 00:13:51.023 }, 00:13:51.023 { 00:13:51.023 "method": "bdev_wait_for_examine" 00:13:51.023 } 00:13:51.023 ] 00:13:51.023 } 00:13:51.023 ] 00:13:51.023 } 00:13:51.279 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:51.279 fio-3.35 00:13:51.279 Starting 1 thread 00:13:57.841 00:13:57.841 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70522: Fri Dec 6 20:41:13 2024 00:13:57.841 read: IOPS=64.4k, BW=252MiB/s (264MB/s)(1258MiB/5001msec) 00:13:57.841 slat (nsec): min=2874, max=72529, avg=3522.48, stdev=1067.25 00:13:57.841 clat (usec): min=332, max=11446, avg=858.68, stdev=166.42 00:13:57.841 lat (usec): min=336, max=11449, avg=862.20, stdev=166.62 00:13:57.841 clat percentiles (usec): 00:13:57.841 | 1.00th=[ 668], 5.00th=[ 693], 10.00th=[ 709], 20.00th=[ 742], 00:13:57.841 | 30.00th=[ 775], 40.00th=[ 799], 50.00th=[ 832], 60.00th=[ 857], 00:13:57.841 | 70.00th=[ 889], 80.00th=[ 930], 90.00th=[ 1057], 95.00th=[ 1139], 00:13:57.841 | 99.00th=[ 1401], 99.50th=[ 1516], 99.90th=[ 1909], 99.95th=[ 2376], 00:13:57.841 | 99.99th=[ 4686] 00:13:57.841 bw ( KiB/s): 
min=249536, max=269312, per=100.00%, avg=258996.44, stdev=6990.93, samples=9 00:13:57.841 iops : min=62384, max=67328, avg=64749.11, stdev=1747.73, samples=9 00:13:57.841 lat (usec) : 500=0.01%, 750=22.65%, 1000=64.05% 00:13:57.841 lat (msec) : 2=13.20%, 4=0.07%, 10=0.01%, 20=0.01% 00:13:57.841 cpu : usr=42.44%, sys=54.26%, ctx=15, majf=0, minf=762 00:13:57.841 IO depths : 1=1.4%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:57.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.841 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:57.841 issued rwts: total=322039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.841 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:57.841 00:13:57.841 Run status group 0 (all jobs): 00:13:57.841 READ: bw=252MiB/s (264MB/s), 252MiB/s-252MiB/s (264MB/s-264MB/s), io=1258MiB (1319MB), run=5001-5001msec 00:13:57.841 ----------------------------------------------------- 00:13:57.841 Suppressions used: 00:13:57.841 count bytes template 00:13:57.841 1 11 /usr/src/fio/parse.c 00:13:57.841 1 8 libtcmalloc_minimal.so 00:13:57.841 1 904 libcrypto.so 00:13:57.841 ----------------------------------------------------- 00:13:57.841 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
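The sanitizer detection traced through these fio runs reduces to a few lines of shell: the spdk_bdev fio plugin is built with ASan, and because fio itself is not, the ASan runtime has to appear in LD_PRELOAD ahead of the plugin, otherwise ASan aborts at startup because its runtime is not first in the library list. A condensed sketch of what the fio_plugin helper does here (paths as in the trace; not the helper's exact code):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Locate the ASan runtime the plugin links against.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload the runtime first, then the plugin, and hand fio the spdk_bdev ioengine.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev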
00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:57.841 20:41:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.841 { 00:13:57.841 "subsystems": [ 00:13:57.841 { 00:13:57.841 "subsystem": "bdev", 00:13:57.841 "config": [ 00:13:57.841 { 00:13:57.841 "params": { 00:13:57.841 "io_mechanism": "io_uring", 00:13:57.841 "conserve_cpu": true, 00:13:57.841 "filename": "/dev/nvme0n1", 00:13:57.841 "name": "xnvme_bdev" 00:13:57.841 }, 00:13:57.841 "method": "bdev_xnvme_create" 00:13:57.841 }, 00:13:57.841 { 00:13:57.841 "method": "bdev_wait_for_examine" 00:13:57.841 } 00:13:57.841 ] 00:13:57.841 } 00:13:57.841 ] 00:13:57.841 } 00:13:58.116 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:58.116 fio-3.35 00:13:58.116 Starting 1 thread 00:14:04.682 00:14:04.682 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70619: Fri Dec 6 20:41:20 2024 00:14:04.682 write: IOPS=62.3k, BW=243MiB/s (255MB/s)(1217MiB/5001msec); 0 zone resets 00:14:04.682 slat (usec): min=2, max=340, avg= 3.79, stdev= 1.70 00:14:04.682 clat (usec): min=117, max=4470, avg=882.21, stdev=181.84 00:14:04.682 lat (usec): min=120, max=4480, avg=886.00, stdev=182.35 00:14:04.682 clat percentiles (usec): 00:14:04.682 | 1.00th=[ 660], 5.00th=[ 693], 10.00th=[ 709], 20.00th=[ 750], 00:14:04.682 | 30.00th=[ 783], 40.00th=[ 816], 50.00th=[ 848], 60.00th=[ 873], 00:14:04.683 | 70.00th=[ 914], 80.00th=[ 971], 90.00th=[ 1106], 95.00th=[ 1221], 00:14:04.683 | 99.00th=[ 1532], 99.50th=[ 1680], 99.90th=[ 2147], 99.95th=[ 2409], 00:14:04.683 | 99.99th=[ 3097] 00:14:04.683 bw ( KiB/s): min=226096, max=269312, per=100.00%, avg=250057.56, stdev=15123.94, samples=9 00:14:04.683 iops : min=56524, max=67328, avg=62514.33, stdev=3780.91, samples=9 00:14:04.683 lat (usec) : 250=0.01%, 500=0.02%, 750=20.92%, 1000=61.29% 00:14:04.683 lat (msec) : 2=17.60%, 4=0.17%, 10=0.01% 00:14:04.683 cpu : usr=42.26%, sys=54.54%, ctx=11, majf=0, minf=763 00:14:04.683 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:04.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.683 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:04.683 issued rwts: total=0,311537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:04.683 00:14:04.683 Run status group 0 (all jobs): 00:14:04.683 WRITE: bw=243MiB/s (255MB/s), 243MiB/s-243MiB/s (255MB/s-255MB/s), io=1217MiB (1276MB), run=5001-5001msec 00:14:04.683 ----------------------------------------------------- 00:14:04.683 Suppressions used: 00:14:04.683 count bytes template 00:14:04.683 1 11 /usr/src/fio/parse.c 00:14:04.683 1 8 libtcmalloc_minimal.so 00:14:04.683 1 904 libcrypto.so 00:14:04.683 ----------------------------------------------------- 00:14:04.683 00:14:04.683 ************************************ 00:14:04.683 END TEST xnvme_fio_plugin 
00:14:04.683 ************************************ 00:14:04.683 00:14:04.683 real 0m13.510s 00:14:04.683 user 0m6.902s 00:14:04.683 sys 0m5.949s 00:14:04.683 20:41:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.683 20:41:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:04.683 20:41:21 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:04.683 20:41:21 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:04.683 20:41:21 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:04.683 20:41:21 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:04.683 20:41:21 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:04.683 20:41:21 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:04.683 20:41:21 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:04.683 20:41:21 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:04.683 20:41:21 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:04.683 20:41:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:04.683 20:41:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.683 20:41:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.683 ************************************ 00:14:04.683 START TEST xnvme_rpc 00:14:04.683 ************************************ 00:14:04.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70700 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70700 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70700 ']' 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.683 20:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:04.683 [2024-12-06 20:41:21.667335] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
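From here the suite switches backends: filename moves from the block device /dev/nvme0n1 to /dev/ng0n1, the NVMe character ("generic") device, and io_mechanism becomes io_uring_cmd, which drives the drive with NVMe passthrough commands over io_uring instead of block-layer I/O. The empty '' in the create call traced below means conserve_cpu starts out disabled for this pass; the RPC round-trip is otherwise unchanged (same assumed rpc.py client as above):

    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: io_uring_cmd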
00:14:04.683 [2024-12-06 20:41:21.667455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70700 ] 00:14:04.942 [2024-12-06 20:41:21.818738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.942 [2024-12-06 20:41:21.914749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.510 xnvme_bdev 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.510 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.768 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.768 20:41:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70700 00:14:05.768 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70700 ']' 00:14:05.768 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70700 00:14:05.768 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:05.768 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.768 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70700 00:14:05.768 killing process with pid 70700 00:14:05.768 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:05.768 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:05.768 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70700' 00:14:05.768 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70700 00:14:05.768 20:41:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70700 00:14:07.145 ************************************ 00:14:07.145 END TEST xnvme_rpc 00:14:07.145 ************************************ 00:14:07.145 00:14:07.145 real 0m2.567s 00:14:07.145 user 0m2.670s 00:14:07.145 sys 0m0.347s 00:14:07.145 20:41:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.145 20:41:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.145 20:41:24 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:07.145 20:41:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:07.145 20:41:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.145 20:41:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.145 ************************************ 00:14:07.145 START TEST xnvme_bdevperf 00:14:07.145 ************************************ 00:14:07.145 20:41:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:07.145 20:41:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:07.145 20:41:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:07.145 20:41:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.145 20:41:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:07.145 20:41:24 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:07.145 20:41:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.145 20:41:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.145 { 00:14:07.145 "subsystems": [ 00:14:07.145 { 00:14:07.145 "subsystem": "bdev", 00:14:07.145 "config": [ 00:14:07.145 { 00:14:07.145 "params": { 00:14:07.145 "io_mechanism": "io_uring_cmd", 00:14:07.145 "conserve_cpu": false, 00:14:07.145 "filename": "/dev/ng0n1", 00:14:07.145 "name": "xnvme_bdev" 00:14:07.145 }, 00:14:07.145 "method": "bdev_xnvme_create" 00:14:07.145 }, 00:14:07.145 { 00:14:07.145 "method": "bdev_wait_for_examine" 00:14:07.145 } 00:14:07.145 ] 00:14:07.145 } 00:14:07.145 ] 00:14:07.146 } 00:14:07.146 [2024-12-06 20:41:24.258496] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:14:07.146 [2024-12-06 20:41:24.258600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70768 ] 00:14:07.405 [2024-12-06 20:41:24.419115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.405 [2024-12-06 20:41:24.512980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.663 Running I/O for 5 seconds... 00:14:09.967 64683.00 IOPS, 252.67 MiB/s [2024-12-06T20:41:28.031Z] 64670.00 IOPS, 252.62 MiB/s [2024-12-06T20:41:28.964Z] 64174.33 IOPS, 250.68 MiB/s [2024-12-06T20:41:29.913Z] 64060.00 IOPS, 250.23 MiB/s 00:14:12.780 Latency(us) 00:14:12.780 [2024-12-06T20:41:29.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.780 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:12.780 xnvme_bdev : 5.00 64040.58 250.16 0.00 0.00 995.37 422.20 11998.13 00:14:12.780 [2024-12-06T20:41:29.913Z] =================================================================================================================== 00:14:12.780 [2024-12-06T20:41:29.913Z] Total : 64040.58 250.16 0.00 0.00 995.37 422.20 11998.13 00:14:13.346 20:41:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.346 20:41:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:13.346 20:41:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:13.346 20:41:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:13.346 20:41:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.605 { 00:14:13.605 "subsystems": [ 00:14:13.605 { 00:14:13.605 "subsystem": "bdev", 00:14:13.605 "config": [ 00:14:13.605 { 00:14:13.605 "params": { 00:14:13.605 "io_mechanism": "io_uring_cmd", 00:14:13.605 "conserve_cpu": false, 00:14:13.605 "filename": "/dev/ng0n1", 00:14:13.605 "name": "xnvme_bdev" 00:14:13.605 }, 00:14:13.605 "method": "bdev_xnvme_create" 00:14:13.605 }, 00:14:13.605 { 00:14:13.605 "method": "bdev_wait_for_examine" 00:14:13.605 } 00:14:13.605 ] 00:14:13.605 } 00:14:13.605 ] 00:14:13.605 } 00:14:13.605 [2024-12-06 20:41:30.518017] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:14:13.605 [2024-12-06 20:41:30.518128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70838 ] 00:14:13.605 [2024-12-06 20:41:30.675813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.864 [2024-12-06 20:41:30.770677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.124 Running I/O for 5 seconds... 00:14:15.995 58944.00 IOPS, 230.25 MiB/s [2024-12-06T20:41:34.063Z] 61152.00 IOPS, 238.88 MiB/s [2024-12-06T20:41:35.434Z] 62016.00 IOPS, 242.25 MiB/s [2024-12-06T20:41:36.368Z] 62320.00 IOPS, 243.44 MiB/s 00:14:19.235 Latency(us) 00:14:19.235 [2024-12-06T20:41:36.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.235 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:19.235 xnvme_bdev : 5.00 62249.96 243.16 0.00 0.00 1023.77 734.13 2911.31 00:14:19.235 [2024-12-06T20:41:36.368Z] =================================================================================================================== 00:14:19.235 [2024-12-06T20:41:36.368Z] Total : 62249.96 243.16 0.00 0.00 1023.77 734.13 2911.31 00:14:19.802 20:41:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:19.802 20:41:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:19.802 20:41:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:19.802 20:41:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:19.802 20:41:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:19.802 { 00:14:19.802 "subsystems": [ 00:14:19.802 { 00:14:19.802 "subsystem": "bdev", 00:14:19.802 "config": [ 00:14:19.802 { 00:14:19.802 "params": { 00:14:19.802 "io_mechanism": "io_uring_cmd", 00:14:19.802 "conserve_cpu": false, 00:14:19.802 "filename": "/dev/ng0n1", 00:14:19.802 "name": "xnvme_bdev" 00:14:19.802 }, 00:14:19.802 "method": "bdev_xnvme_create" 00:14:19.802 }, 00:14:19.802 { 00:14:19.802 "method": "bdev_wait_for_examine" 00:14:19.802 } 00:14:19.802 ] 00:14:19.802 } 00:14:19.802 ] 00:14:19.802 } 00:14:19.802 [2024-12-06 20:41:36.799409] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:14:19.802 [2024-12-06 20:41:36.799958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70913 ] 00:14:20.060 [2024-12-06 20:41:36.959650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.060 [2024-12-06 20:41:37.054749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.318 Running I/O for 5 seconds... 
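The char-device backend also widens the workload sweep: the io_uring runs earlier covered randread and randwrite only, while the loop now continues through unmap (deallocate) and write_zeroes, as the unmap run starting here and the write_zeroes run after it show. The whole sweep reduces to one bdevperf invocation per workload, along the lines of this sketch (the loop body is inferred from the traced invocations, not copied from xnvme.sh):

    # One pass per workload; flags as in the traced invocations above.
    for w in randread randwrite unmap write_zeroes; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json <(gen_conf) -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
    done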
00:14:22.184 71872.00 IOPS, 280.75 MiB/s [2024-12-06T20:41:40.689Z] 77824.00 IOPS, 304.00 MiB/s [2024-12-06T20:41:41.622Z] 78954.67 IOPS, 308.42 MiB/s [2024-12-06T20:41:42.556Z] 78688.00 IOPS, 307.38 MiB/s [2024-12-06T20:41:42.556Z] 82112.00 IOPS, 320.75 MiB/s 00:14:25.423 Latency(us) 00:14:25.423 [2024-12-06T20:41:42.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.423 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:25.423 xnvme_bdev : 5.00 82071.29 320.59 0.00 0.00 776.36 519.88 2520.62 00:14:25.423 [2024-12-06T20:41:42.556Z] =================================================================================================================== 00:14:25.423 [2024-12-06T20:41:42.556Z] Total : 82071.29 320.59 0.00 0.00 776.36 519.88 2520.62 00:14:25.988 20:41:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:25.988 20:41:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:25.988 20:41:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:25.988 20:41:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:25.988 20:41:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:25.988 { 00:14:25.988 "subsystems": [ 00:14:25.988 { 00:14:25.988 "subsystem": "bdev", 00:14:25.988 "config": [ 00:14:25.988 { 00:14:25.988 "params": { 00:14:25.988 "io_mechanism": "io_uring_cmd", 00:14:25.988 "conserve_cpu": false, 00:14:25.988 "filename": "/dev/ng0n1", 00:14:25.988 "name": "xnvme_bdev" 00:14:25.988 }, 00:14:25.988 "method": "bdev_xnvme_create" 00:14:25.988 }, 00:14:25.988 { 00:14:25.988 "method": "bdev_wait_for_examine" 00:14:25.988 } 00:14:25.988 ] 00:14:25.988 } 00:14:25.988 ] 00:14:25.988 } 00:14:25.988 [2024-12-06 20:41:42.921724] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:14:25.988 [2024-12-06 20:41:42.921837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70987 ] 00:14:25.988 [2024-12-06 20:41:43.077086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.246 [2024-12-06 20:41:43.152164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.246 Running I/O for 5 seconds... 
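The unmap totals just above can be cross-checked the same way as the earlier runs, this time converting IOPS to bandwidth; with no data payload on the wire, deallocate unsurprisingly posts the highest IOPS of the sweep.

    # IOPS-to-bandwidth conversion for the unmap run above (4096 B per I/O):
    awk 'BEGIN { printf "%.2f MiB/s\n", 82071.29 * 4096 / 1048576 }'   # ~320.59 MiB/s, matching the table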
00:14:28.221 52746.00 IOPS, 206.04 MiB/s [2024-12-06T20:41:46.723Z] 43088.00 IOPS, 168.31 MiB/s [2024-12-06T20:41:47.654Z] 40840.00 IOPS, 159.53 MiB/s [2024-12-06T20:41:48.584Z] 39311.25 IOPS, 153.56 MiB/s [2024-12-06T20:41:48.584Z] 38034.80 IOPS, 148.57 MiB/s 00:14:31.451 Latency(us) 00:14:31.451 [2024-12-06T20:41:48.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.451 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:31.451 xnvme_bdev : 5.01 37996.11 148.42 0.00 0.00 1679.93 48.64 22181.42 00:14:31.451 [2024-12-06T20:41:48.584Z] =================================================================================================================== 00:14:31.451 [2024-12-06T20:41:48.584Z] Total : 37996.11 148.42 0.00 0.00 1679.93 48.64 22181.42 00:14:32.015 ************************************ 00:14:32.015 END TEST xnvme_bdevperf 00:14:32.015 ************************************ 00:14:32.015 00:14:32.015 real 0m24.879s 00:14:32.015 user 0m13.564s 00:14:32.015 sys 0m10.893s 00:14:32.015 20:41:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.015 20:41:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:32.015 20:41:49 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:32.015 20:41:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.015 20:41:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.015 20:41:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.015 ************************************ 00:14:32.015 START TEST xnvme_fio_plugin 00:14:32.015 ************************************ 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:32.015 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:32.272 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:32.272 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:32.272 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:32.272 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:32.272 20:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:32.272 { 00:14:32.272 "subsystems": [ 00:14:32.272 { 00:14:32.272 "subsystem": "bdev", 00:14:32.272 "config": [ 00:14:32.272 { 00:14:32.272 "params": { 00:14:32.272 "io_mechanism": "io_uring_cmd", 00:14:32.272 "conserve_cpu": false, 00:14:32.272 "filename": "/dev/ng0n1", 00:14:32.272 "name": "xnvme_bdev" 00:14:32.272 }, 00:14:32.272 "method": "bdev_xnvme_create" 00:14:32.272 }, 00:14:32.272 { 00:14:32.272 "method": "bdev_wait_for_examine" 00:14:32.272 } 00:14:32.272 ] 00:14:32.272 } 00:14:32.272 ] 00:14:32.272 } 00:14:32.272 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:32.272 fio-3.35 00:14:32.272 Starting 1 thread 00:14:38.845 00:14:38.845 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71100: Fri Dec 6 20:41:54 2024 00:14:38.845 read: IOPS=38.3k, BW=150MiB/s (157MB/s)(748MiB/5001msec) 00:14:38.845 slat (usec): min=2, max=204, avg= 3.89, stdev= 2.49 00:14:38.845 clat (usec): min=663, max=2893, avg=1514.52, stdev=299.40 00:14:38.845 lat (usec): min=666, max=2920, avg=1518.41, stdev=299.82 00:14:38.845 clat percentiles (usec): 00:14:38.845 | 1.00th=[ 914], 5.00th=[ 1045], 10.00th=[ 1139], 20.00th=[ 1254], 00:14:38.845 | 30.00th=[ 1352], 40.00th=[ 1434], 50.00th=[ 1500], 60.00th=[ 1565], 00:14:38.845 | 70.00th=[ 1647], 80.00th=[ 1745], 90.00th=[ 1893], 95.00th=[ 2040], 00:14:38.845 | 99.00th=[ 2311], 99.50th=[ 2442], 99.90th=[ 2638], 99.95th=[ 2704], 00:14:38.845 | 99.99th=[ 2802] 00:14:38.845 bw ( KiB/s): min=136192, max=172544, per=98.95%, avg=151523.56, stdev=12823.30, samples=9 00:14:38.845 iops : min=34048, max=43136, avg=37880.89, stdev=3205.83, samples=9 00:14:38.845 lat (usec) : 750=0.03%, 1000=3.17% 00:14:38.845 lat (msec) : 2=90.52%, 4=6.28% 00:14:38.845 cpu : usr=34.52%, sys=63.66%, ctx=22, majf=0, minf=762 00:14:38.845 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:38.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.845 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:14:38.845 issued rwts: total=191456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:38.845 00:14:38.845 Run status group 0 (all jobs): 00:14:38.845 READ: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=748MiB (784MB), run=5001-5001msec 00:14:38.845 ----------------------------------------------------- 00:14:38.845 Suppressions used: 00:14:38.845 count bytes template 00:14:38.845 1 11 /usr/src/fio/parse.c 00:14:38.845 1 8 libtcmalloc_minimal.so 00:14:38.845 1 904 libcrypto.so 00:14:38.845 ----------------------------------------------------- 00:14:38.845 00:14:38.845 20:41:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.845 20:41:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:38.846 20:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:38.846 { 00:14:38.846 "subsystems": [ 00:14:38.846 { 00:14:38.846 "subsystem": "bdev", 00:14:38.846 "config": [ 00:14:38.846 { 00:14:38.846 "params": { 00:14:38.846 "io_mechanism": "io_uring_cmd", 00:14:38.846 "conserve_cpu": false, 00:14:38.846 "filename": "/dev/ng0n1", 00:14:38.846 "name": "xnvme_bdev" 00:14:38.846 }, 00:14:38.846 "method": "bdev_xnvme_create" 00:14:38.846 }, 00:14:38.846 { 00:14:38.846 "method": "bdev_wait_for_examine" 00:14:38.846 } 00:14:38.846 ] 00:14:38.846 } 00:14:38.846 ] 00:14:38.846 } 00:14:39.104 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:39.104 fio-3.35 00:14:39.104 Starting 1 thread 00:14:45.662 00:14:45.662 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71191: Fri Dec 6 20:42:01 2024 00:14:45.662 write: IOPS=39.0k, BW=152MiB/s (160MB/s)(762MiB/5002msec); 0 zone resets 00:14:45.662 slat (usec): min=2, max=255, avg= 3.87, stdev= 2.37 00:14:45.662 clat (usec): min=570, max=4799, avg=1487.28, stdev=269.72 00:14:45.662 lat (usec): min=574, max=4808, avg=1491.16, stdev=270.07 00:14:45.662 clat percentiles (usec): 00:14:45.662 | 1.00th=[ 930], 5.00th=[ 1057], 10.00th=[ 1139], 20.00th=[ 1254], 00:14:45.662 | 30.00th=[ 1352], 40.00th=[ 1418], 50.00th=[ 1483], 60.00th=[ 1549], 00:14:45.662 | 70.00th=[ 1614], 80.00th=[ 1696], 90.00th=[ 1811], 95.00th=[ 1909], 00:14:45.662 | 99.00th=[ 2180], 99.50th=[ 2311], 99.90th=[ 2900], 99.95th=[ 3064], 00:14:45.662 | 99.99th=[ 3752] 00:14:45.662 bw ( KiB/s): min=150200, max=162232, per=99.86%, avg=155794.78, stdev=4266.96, samples=9 00:14:45.662 iops : min=37550, max=40558, avg=38948.67, stdev=1066.74, samples=9 00:14:45.662 lat (usec) : 750=0.08%, 1000=2.46% 00:14:45.662 lat (msec) : 2=94.57%, 4=2.88%, 10=0.01% 00:14:45.662 cpu : usr=38.95%, sys=59.59%, ctx=19, majf=0, minf=763 00:14:45.662 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.3%, >=64=1.6% 00:14:45.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.662 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:45.662 issued rwts: total=0,195084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:45.662 00:14:45.662 Run status group 0 (all jobs): 00:14:45.662 WRITE: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=762MiB (799MB), run=5002-5002msec 00:14:45.662 ----------------------------------------------------- 00:14:45.662 Suppressions used: 00:14:45.662 count bytes template 00:14:45.662 1 11 /usr/src/fio/parse.c 00:14:45.662 1 8 libtcmalloc_minimal.so 00:14:45.662 1 904 libcrypto.so 00:14:45.662 ----------------------------------------------------- 00:14:45.662 00:14:45.662 ************************************ 00:14:45.662 END TEST xnvme_fio_plugin 00:14:45.662 ************************************ 00:14:45.662 00:14:45.662 real 0m13.479s 00:14:45.662 user 0m6.341s 00:14:45.662 sys 0m6.642s 00:14:45.662 20:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.662 20:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:45.662 20:42:02 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:45.662 20:42:02 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:45.662 20:42:02 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:45.662 
20:42:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:45.662 20:42:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:45.662 20:42:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.662 20:42:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.662 ************************************ 00:14:45.662 START TEST xnvme_rpc 00:14:45.662 ************************************ 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:45.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71276 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71276 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71276 ']' 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.662 20:42:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.662 [2024-12-06 20:42:02.747314] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
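Stripped of the xtrace plumbing, the RPC sequence exercised below is short. A minimal sketch of the equivalent manual steps, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock and commands run from the SPDK repo root:

# create an xnvme bdev over io_uring_cmd with conserve_cpu enabled (-c)
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
# read the bdev subsystem config back and extract the created bdev's params
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params'
# tear the bdev down again before stopping the target
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev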
00:14:45.662 [2024-12-06 20:42:02.748014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71276 ] 00:14:45.921 [2024-12-06 20:42:02.910260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.921 [2024-12-06 20:42:03.005233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.524 xnvme_bdev 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.524 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71276 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71276 ']' 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71276 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71276 00:14:46.782 killing process with pid 71276 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71276' 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71276 00:14:46.782 20:42:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71276 00:14:48.154 00:14:48.154 real 0m2.596s 00:14:48.154 user 0m2.718s 00:14:48.154 sys 0m0.349s 00:14:48.154 20:42:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.154 20:42:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.154 ************************************ 00:14:48.154 END TEST xnvme_rpc 00:14:48.154 ************************************ 00:14:48.411 20:42:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:48.411 20:42:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:48.411 20:42:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.411 20:42:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:48.411 ************************************ 00:14:48.411 START TEST xnvme_bdevperf 00:14:48.411 ************************************ 00:14:48.411 20:42:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:48.411 20:42:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:48.411 20:42:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:48.411 20:42:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:48.411 20:42:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:48.411 20:42:05 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:48.411 20:42:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:48.411 20:42:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:48.411 { 00:14:48.411 "subsystems": [ 00:14:48.411 { 00:14:48.411 "subsystem": "bdev", 00:14:48.411 "config": [ 00:14:48.411 { 00:14:48.411 "params": { 00:14:48.411 "io_mechanism": "io_uring_cmd", 00:14:48.411 "conserve_cpu": true, 00:14:48.411 "filename": "/dev/ng0n1", 00:14:48.411 "name": "xnvme_bdev" 00:14:48.411 }, 00:14:48.411 "method": "bdev_xnvme_create" 00:14:48.411 }, 00:14:48.411 { 00:14:48.411 "method": "bdev_wait_for_examine" 00:14:48.411 } 00:14:48.411 ] 00:14:48.411 } 00:14:48.411 ] 00:14:48.411 } 00:14:48.411 [2024-12-06 20:42:05.397622] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:14:48.411 [2024-12-06 20:42:05.397733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71339 ] 00:14:48.667 [2024-12-06 20:42:05.556339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.667 [2024-12-06 20:42:05.656281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.923 Running I/O for 5 seconds... 00:14:50.791 42346.00 IOPS, 165.41 MiB/s [2024-12-06T20:42:09.299Z] 42442.50 IOPS, 165.79 MiB/s [2024-12-06T20:42:10.231Z] 42707.00 IOPS, 166.82 MiB/s [2024-12-06T20:42:11.162Z] 42641.25 IOPS, 166.57 MiB/s [2024-12-06T20:42:11.162Z] 42688.80 IOPS, 166.75 MiB/s 00:14:54.029 Latency(us) 00:14:54.029 [2024-12-06T20:42:11.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.029 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:54.029 xnvme_bdev : 5.00 42659.15 166.64 0.00 0.00 1496.39 762.49 8368.44 00:14:54.029 [2024-12-06T20:42:11.162Z] =================================================================================================================== 00:14:54.029 [2024-12-06T20:42:11.162Z] Total : 42659.15 166.64 0.00 0.00 1496.39 762.49 8368.44 00:14:54.595 20:42:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:54.595 20:42:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:54.595 20:42:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:54.595 20:42:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:54.595 20:42:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:54.595 { 00:14:54.595 "subsystems": [ 00:14:54.595 { 00:14:54.595 "subsystem": "bdev", 00:14:54.595 "config": [ 00:14:54.595 { 00:14:54.595 "params": { 00:14:54.595 "io_mechanism": "io_uring_cmd", 00:14:54.595 "conserve_cpu": true, 00:14:54.595 "filename": "/dev/ng0n1", 00:14:54.595 "name": "xnvme_bdev" 00:14:54.595 }, 00:14:54.595 "method": "bdev_xnvme_create" 00:14:54.595 }, 00:14:54.595 { 00:14:54.595 "method": "bdev_wait_for_examine" 00:14:54.595 } 00:14:54.595 ] 00:14:54.595 } 00:14:54.595 ] 00:14:54.595 } 00:14:54.595 [2024-12-06 20:42:11.686024] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
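Each bdevperf pass here hands its bdev configuration to the tool over an anonymous file descriptor rather than a file on disk. A condensed sketch of that wiring, reusing the JSON printed in the log; the 62<<'EOF' heredoc is an assumption standing in for the test's gen_conf pipe:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 62<<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_xnvme_create",
   "params": {"io_mechanism": "io_uring_cmd", "conserve_cpu": true,
              "filename": "/dev/ng0n1", "name": "xnvme_bdev"}},
  {"method": "bdev_wait_for_examine"}]}]}
EOF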
00:14:54.595 [2024-12-06 20:42:11.686135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71414 ] 00:14:54.855 [2024-12-06 20:42:11.846548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.855 [2024-12-06 20:42:11.956002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.115 Running I/O for 5 seconds... 00:14:57.450 37495.00 IOPS, 146.46 MiB/s [2024-12-06T20:42:15.525Z] 36999.00 IOPS, 144.53 MiB/s [2024-12-06T20:42:16.459Z] 36668.67 IOPS, 143.24 MiB/s [2024-12-06T20:42:17.392Z] 37172.00 IOPS, 145.20 MiB/s 00:15:00.259 Latency(us) 00:15:00.259 [2024-12-06T20:42:17.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.259 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:00.259 xnvme_bdev : 5.00 38374.87 149.90 0.00 0.00 1663.13 627.00 5696.59 00:15:00.259 [2024-12-06T20:42:17.392Z] =================================================================================================================== 00:15:00.259 [2024-12-06T20:42:17.392Z] Total : 38374.87 149.90 0.00 0.00 1663.13 627.00 5696.59 00:15:01.192 20:42:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.192 20:42:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:01.192 20:42:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:01.192 20:42:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:01.192 20:42:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:01.192 { 00:15:01.192 "subsystems": [ 00:15:01.192 { 00:15:01.192 "subsystem": "bdev", 00:15:01.192 "config": [ 00:15:01.192 { 00:15:01.192 "params": { 00:15:01.192 "io_mechanism": "io_uring_cmd", 00:15:01.192 "conserve_cpu": true, 00:15:01.192 "filename": "/dev/ng0n1", 00:15:01.193 "name": "xnvme_bdev" 00:15:01.193 }, 00:15:01.193 "method": "bdev_xnvme_create" 00:15:01.193 }, 00:15:01.193 { 00:15:01.193 "method": "bdev_wait_for_examine" 00:15:01.193 } 00:15:01.193 ] 00:15:01.193 } 00:15:01.193 ] 00:15:01.193 } 00:15:01.193 [2024-12-06 20:42:18.039920] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:15:01.193 [2024-12-06 20:42:18.040030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71493 ] 00:15:01.193 [2024-12-06 20:42:18.193583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.193 [2024-12-06 20:42:18.287269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.451 Running I/O for 5 seconds... 
00:15:03.758 84736.00 IOPS, 331.00 MiB/s [2024-12-06T20:42:21.826Z] 83648.00 IOPS, 326.75 MiB/s [2024-12-06T20:42:22.761Z] 83818.67 IOPS, 327.42 MiB/s [2024-12-06T20:42:23.703Z] 84592.00 IOPS, 330.44 MiB/s [2024-12-06T20:42:23.703Z] 87001.60 IOPS, 339.85 MiB/s 00:15:06.570 Latency(us) 00:15:06.570 [2024-12-06T20:42:23.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.570 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:06.570 xnvme_bdev : 5.00 86954.82 339.67 0.00 0.00 732.67 322.95 2886.10 00:15:06.570 [2024-12-06T20:42:23.703Z] =================================================================================================================== 00:15:06.570 [2024-12-06T20:42:23.703Z] Total : 86954.82 339.67 0.00 0.00 732.67 322.95 2886.10 00:15:07.140 20:42:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:07.140 20:42:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:07.140 20:42:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:07.140 20:42:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:07.140 20:42:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:07.140 { 00:15:07.140 "subsystems": [ 00:15:07.140 { 00:15:07.140 "subsystem": "bdev", 00:15:07.140 "config": [ 00:15:07.140 { 00:15:07.140 "params": { 00:15:07.140 "io_mechanism": "io_uring_cmd", 00:15:07.140 "conserve_cpu": true, 00:15:07.140 "filename": "/dev/ng0n1", 00:15:07.140 "name": "xnvme_bdev" 00:15:07.140 }, 00:15:07.140 "method": "bdev_xnvme_create" 00:15:07.140 }, 00:15:07.140 { 00:15:07.140 "method": "bdev_wait_for_examine" 00:15:07.140 } 00:15:07.140 ] 00:15:07.140 } 00:15:07.140 ] 00:15:07.140 } 00:15:07.140 [2024-12-06 20:42:24.159820] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:15:07.140 [2024-12-06 20:42:24.159954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71562 ] 00:15:07.399 [2024-12-06 20:42:24.317836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.399 [2024-12-06 20:42:24.408302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.660 Running I/O for 5 seconds... 
00:15:09.558 52577.00 IOPS, 205.38 MiB/s [2024-12-06T20:42:27.632Z] 55403.50 IOPS, 216.42 MiB/s [2024-12-06T20:42:29.020Z] 55658.00 IOPS, 217.41 MiB/s [2024-12-06T20:42:29.963Z] 52715.25 IOPS, 205.92 MiB/s [2024-12-06T20:42:29.963Z] 48860.80 IOPS, 190.86 MiB/s 00:15:12.830 Latency(us) 00:15:12.830 [2024-12-06T20:42:29.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.830 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:12.830 xnvme_bdev : 5.01 48792.50 190.60 0.00 0.00 1306.83 55.53 23189.66 00:15:12.830 [2024-12-06T20:42:29.963Z] =================================================================================================================== 00:15:12.830 [2024-12-06T20:42:29.963Z] Total : 48792.50 190.60 0.00 0.00 1306.83 55.53 23189.66 00:15:13.402 00:15:13.402 real 0m25.080s 00:15:13.402 user 0m16.401s 00:15:13.402 sys 0m6.660s 00:15:13.402 20:42:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.402 20:42:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:13.402 ************************************ 00:15:13.402 END TEST xnvme_bdevperf 00:15:13.402 ************************************ 00:15:13.402 20:42:30 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:13.402 20:42:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:13.402 20:42:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.402 20:42:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.402 ************************************ 00:15:13.402 START TEST xnvme_fio_plugin 00:15:13.402 ************************************ 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
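The helper below first resolves which sanitizer runtime the fio plugin links against (ldd build/fio/spdk_bdev | grep libasan | awk '{print $3}') and preloads it ahead of the ioengine. The fully resolved command it ends up running, condensed from the xtrace:

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev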
00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:13.402 20:42:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:13.402 { 00:15:13.402 "subsystems": [ 00:15:13.402 { 00:15:13.402 "subsystem": "bdev", 00:15:13.402 "config": [ 00:15:13.402 { 00:15:13.402 "params": { 00:15:13.402 "io_mechanism": "io_uring_cmd", 00:15:13.402 "conserve_cpu": true, 00:15:13.402 "filename": "/dev/ng0n1", 00:15:13.402 "name": "xnvme_bdev" 00:15:13.402 }, 00:15:13.402 "method": "bdev_xnvme_create" 00:15:13.402 }, 00:15:13.402 { 00:15:13.402 "method": "bdev_wait_for_examine" 00:15:13.402 } 00:15:13.402 ] 00:15:13.402 } 00:15:13.402 ] 00:15:13.402 } 00:15:13.663 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:13.663 fio-3.35 00:15:13.663 Starting 1 thread 00:15:20.241 00:15:20.241 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71680: Fri Dec 6 20:42:36 2024 00:15:20.241 read: IOPS=35.2k, BW=137MiB/s (144MB/s)(687MiB/5001msec) 00:15:20.241 slat (nsec): min=2886, max=82925, avg=3916.91, stdev=2262.83 00:15:20.241 clat (usec): min=866, max=3237, avg=1661.87, stdev=297.98 00:15:20.241 lat (usec): min=869, max=3261, avg=1665.79, stdev=298.51 00:15:20.241 clat percentiles (usec): 00:15:20.241 | 1.00th=[ 1123], 5.00th=[ 1237], 10.00th=[ 1319], 20.00th=[ 1401], 00:15:20.241 | 30.00th=[ 1483], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1696], 00:15:20.241 | 70.00th=[ 1795], 80.00th=[ 1909], 90.00th=[ 2073], 95.00th=[ 2212], 00:15:20.241 | 99.00th=[ 2474], 99.50th=[ 2606], 99.90th=[ 2868], 99.95th=[ 2933], 00:15:20.241 | 99.99th=[ 3097] 00:15:20.241 bw ( KiB/s): min=134144, max=150016, per=100.00%, avg=141084.44, stdev=5949.51, samples=9 00:15:20.241 iops : min=33536, max=37504, avg=35271.11, stdev=1487.38, samples=9 00:15:20.241 lat (usec) : 1000=0.11% 00:15:20.241 lat (msec) : 2=86.28%, 4=13.61% 00:15:20.241 cpu : usr=55.68%, sys=41.02%, ctx=11, majf=0, minf=762 00:15:20.241 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:20.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.241 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:15:20.241 issued rwts: total=175808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:20.241 00:15:20.241 Run status group 0 (all jobs): 00:15:20.241 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=687MiB (720MB), run=5001-5001msec 00:15:20.241 ----------------------------------------------------- 00:15:20.241 Suppressions used: 00:15:20.241 count bytes template 00:15:20.241 1 11 /usr/src/fio/parse.c 00:15:20.241 1 8 libtcmalloc_minimal.so 00:15:20.241 1 904 libcrypto.so 00:15:20.241 ----------------------------------------------------- 00:15:20.241 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:20.502 20:42:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:20.502 { 00:15:20.502 "subsystems": [ 00:15:20.502 { 00:15:20.502 "subsystem": "bdev", 00:15:20.502 "config": [ 00:15:20.502 { 00:15:20.502 "params": { 00:15:20.502 "io_mechanism": "io_uring_cmd", 00:15:20.502 "conserve_cpu": true, 00:15:20.502 "filename": "/dev/ng0n1", 00:15:20.502 "name": "xnvme_bdev" 00:15:20.502 }, 00:15:20.502 "method": "bdev_xnvme_create" 00:15:20.502 }, 00:15:20.502 { 00:15:20.502 "method": "bdev_wait_for_examine" 00:15:20.502 } 00:15:20.502 ] 00:15:20.502 } 00:15:20.502 ] 00:15:20.502 } 00:15:20.502 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:20.502 fio-3.35 00:15:20.502 Starting 1 thread 00:15:27.117 00:15:27.117 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71765: Fri Dec 6 20:42:43 2024 00:15:27.117 write: IOPS=35.2k, BW=137MiB/s (144MB/s)(687MiB/5001msec); 0 zone resets 00:15:27.117 slat (usec): min=2, max=141, avg= 4.34, stdev= 2.78 00:15:27.117 clat (usec): min=342, max=5661, avg=1643.43, stdev=297.57 00:15:27.117 lat (usec): min=347, max=5664, avg=1647.77, stdev=298.31 00:15:27.117 clat percentiles (usec): 00:15:27.117 | 1.00th=[ 1123], 5.00th=[ 1237], 10.00th=[ 1303], 20.00th=[ 1401], 00:15:27.117 | 30.00th=[ 1483], 40.00th=[ 1532], 50.00th=[ 1598], 60.00th=[ 1680], 00:15:27.117 | 70.00th=[ 1762], 80.00th=[ 1860], 90.00th=[ 2024], 95.00th=[ 2180], 00:15:27.117 | 99.00th=[ 2507], 99.50th=[ 2638], 99.90th=[ 3359], 99.95th=[ 3687], 00:15:27.117 | 99.99th=[ 4621] 00:15:27.117 bw ( KiB/s): min=133888, max=147632, per=100.00%, avg=140879.11, stdev=3717.29, samples=9 00:15:27.117 iops : min=33472, max=36908, avg=35219.78, stdev=929.32, samples=9 00:15:27.117 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.17% 00:15:27.117 lat (msec) : 2=88.72%, 4=11.08%, 10=0.02% 00:15:27.117 cpu : usr=51.90%, sys=43.88%, ctx=21, majf=0, minf=763 00:15:27.117 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.3%, >=64=1.6% 00:15:27.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.117 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:27.117 issued rwts: total=0,175863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.117 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.117 00:15:27.117 Run status group 0 (all jobs): 00:15:27.117 WRITE: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=687MiB (720MB), run=5001-5001msec 00:15:27.377 ----------------------------------------------------- 00:15:27.377 Suppressions used: 00:15:27.377 count bytes template 00:15:27.377 1 11 /usr/src/fio/parse.c 00:15:27.377 1 8 libtcmalloc_minimal.so 00:15:27.377 1 904 libcrypto.so 00:15:27.377 ----------------------------------------------------- 00:15:27.377 00:15:27.377 00:15:27.377 real 0m13.830s 00:15:27.377 user 0m8.290s 00:15:27.377 sys 0m4.843s 00:15:27.377 20:42:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.377 20:42:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:27.377 ************************************ 00:15:27.377 END TEST xnvme_fio_plugin 00:15:27.377 ************************************ 00:15:27.377 20:42:44 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71276 00:15:27.377 20:42:44 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71276 ']' 00:15:27.377 Process with pid 71276 is not found 00:15:27.377 20:42:44 nvme_xnvme -- common/autotest_common.sh@958 -- # 
kill -0 71276 00:15:27.377 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71276) - No such process 00:15:27.377 20:42:44 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71276 is not found' 00:15:27.377 ************************************ 00:15:27.377 END TEST nvme_xnvme 00:15:27.377 00:15:27.377 real 3m25.943s 00:15:27.377 user 1m51.895s 00:15:27.377 sys 1m19.681s 00:15:27.377 20:42:44 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.377 20:42:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.377 ************************************ 00:15:27.377 20:42:44 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:27.377 20:42:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:27.377 20:42:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.377 20:42:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.377 ************************************ 00:15:27.377 START TEST blockdev_xnvme 00:15:27.377 ************************************ 00:15:27.377 20:42:44 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:27.377 * Looking for test storage... 00:15:27.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:27.377 20:42:44 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:27.377 20:42:44 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:27.377 20:42:44 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:27.637 20:42:44 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:27.637 20:42:44 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.637 20:42:44 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.637 20:42:44 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.637 20:42:44 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.637 20:42:44 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.637 20:42:44 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.637 20:42:44 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.637 20:42:44 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.637 20:42:44 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.637 20:42:44 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.637 20:42:44 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.638 20:42:44 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:27.638 20:42:44 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.638 20:42:44 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:27.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.638 --rc genhtml_branch_coverage=1 00:15:27.638 --rc genhtml_function_coverage=1 00:15:27.638 --rc genhtml_legend=1 00:15:27.638 --rc geninfo_all_blocks=1 00:15:27.638 --rc geninfo_unexecuted_blocks=1 00:15:27.638 00:15:27.638 ' 00:15:27.638 20:42:44 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:27.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.638 --rc genhtml_branch_coverage=1 00:15:27.638 --rc genhtml_function_coverage=1 00:15:27.638 --rc genhtml_legend=1 00:15:27.638 --rc geninfo_all_blocks=1 00:15:27.638 --rc geninfo_unexecuted_blocks=1 00:15:27.638 00:15:27.638 ' 00:15:27.638 20:42:44 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:27.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.638 --rc genhtml_branch_coverage=1 00:15:27.638 --rc genhtml_function_coverage=1 00:15:27.638 --rc genhtml_legend=1 00:15:27.638 --rc geninfo_all_blocks=1 00:15:27.638 --rc geninfo_unexecuted_blocks=1 00:15:27.638 00:15:27.638 ' 00:15:27.638 20:42:44 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:27.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.638 --rc genhtml_branch_coverage=1 00:15:27.638 --rc genhtml_function_coverage=1 00:15:27.638 --rc genhtml_legend=1 00:15:27.638 --rc geninfo_all_blocks=1 00:15:27.638 --rc geninfo_unexecuted_blocks=1 00:15:27.638 00:15:27.638 ' 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71905 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:27.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71905 00:15:27.638 20:42:44 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71905 ']' 00:15:27.638 20:42:44 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.638 20:42:44 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.638 20:42:44 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.638 20:42:44 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.638 20:42:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.638 20:42:44 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:27.638 [2024-12-06 20:42:44.671518] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
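The device scan that follows keeps only non-zoned namespaces and turns each survivor into a bdev_xnvme_create command. Its core logic, reduced to a standalone sketch under the assumptions visible in the log (sysfs exposes queue/zoned per namespace, io_uring as the io mechanism):

io_mechanism=io_uring
nvmes=()
for nvme in /dev/nvme*n*; do
  [[ -b $nvme ]] || continue
  zoned=/sys/block/${nvme##*/}/queue/zoned
  # zoned namespaces are skipped; everything else becomes an xnvme bdev
  [[ -e $zoned && $(<"$zoned") != none ]] && continue
  nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
done
printf '%s\n' "${nvmes[@]}"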
00:15:27.638 [2024-12-06 20:42:44.671677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71905 ] 00:15:27.899 [2024-12-06 20:42:44.834645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.899 [2024-12-06 20:42:44.959615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.841 20:42:45 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.841 20:42:45 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:28.841 20:42:45 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:28.841 20:42:45 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:28.841 20:42:45 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:28.841 20:42:45 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:28.841 20:42:45 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:29.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:29.675 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:29.675 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:29.675 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:29.675 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:29.675 20:42:46 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:29.675 nvme0n1 00:15:29.675 nvme0n2 00:15:29.675 nvme0n3 00:15:29.675 nvme1n1 00:15:29.675 nvme2n1 00:15:29.675 nvme3n1 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.675 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.675 20:42:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.936 20:42:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.936 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:29.936 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:29.936 20:42:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.936 20:42:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.936 20:42:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.936 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:29.936 20:42:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.936 20:42:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.936 20:42:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.936 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:29.936 20:42:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.936 20:42:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.936 
20:42:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.937 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:29.937 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.937 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.937 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:29.937 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "779fcd32-0a0b-41d2-82b4-6ffa5fe5e272"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "779fcd32-0a0b-41d2-82b4-6ffa5fe5e272",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "e3a42682-f7d9-48f1-8232-07bc81e33d99"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3a42682-f7d9-48f1-8232-07bc81e33d99",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "ed91b765-2db3-4320-a8f6-7f1845444d34"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed91b765-2db3-4320-a8f6-7f1845444d34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"949b22b0-2688-4532-9195-db780f0d7aff"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "949b22b0-2688-4532-9195-db780f0d7aff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6b63bf5e-6b0a-4378-908b-71cb97a5d6ba"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6b63bf5e-6b0a-4378-908b-71cb97a5d6ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "712291ce-a167-4209-a190-7a6fe041edc7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "712291ce-a167-4209-a190-7a6fe041edc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:29.937 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:29.937 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:29.937 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:29.937 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:29.937 20:42:46 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 71905 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71905 ']' 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71905 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 71905 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.937 killing process with pid 71905 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71905' 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71905 00:15:29.937 20:42:46 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71905 00:15:31.851 20:42:48 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:31.851 20:42:48 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:31.851 20:42:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:31.851 20:42:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.851 20:42:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.852 ************************************ 00:15:31.852 START TEST bdev_hello_world 00:15:31.852 ************************************ 00:15:31.852 20:42:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:31.852 [2024-12-06 20:42:48.683492] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:15:31.852 [2024-12-06 20:42:48.683644] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72184 ] 00:15:31.852 [2024-12-06 20:42:48.848169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.852 [2024-12-06 20:42:48.980544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.424 [2024-12-06 20:42:49.394319] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:32.424 [2024-12-06 20:42:49.394394] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:32.424 [2024-12-06 20:42:49.394413] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:32.424 [2024-12-06 20:42:49.396607] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:32.424 [2024-12-06 20:42:49.397060] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:32.424 [2024-12-06 20:42:49.397084] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:32.424 [2024-12-06 20:42:49.397374] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
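The hello_bdev example above performs one complete round trip against the first unclaimed bdev: it opens nvme0n1, opens an I/O channel, writes "Hello World!", reads it back, and stops the app once the read matches. The same run can be repeated by hand from the SPDK repo root with the command the harness used (sudo is assumed here because the app needs hugepage and device access):

    # Same invocation as in the trace, relative to the SPDK repo root
    sudo ./build/examples/hello_bdev \
        --json ./test/bdev/bdev.json \
        -b nvme0n1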
00:15:32.424 00:15:32.424 [2024-12-06 20:42:49.397406] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:33.417 00:15:33.417 real 0m1.600s 00:15:33.417 user 0m1.206s 00:15:33.417 sys 0m0.243s 00:15:33.417 ************************************ 00:15:33.417 END TEST bdev_hello_world 00:15:33.417 ************************************ 00:15:33.417 20:42:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.417 20:42:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:33.417 20:42:50 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:33.417 20:42:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:33.417 20:42:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.417 20:42:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:33.417 ************************************ 00:15:33.417 START TEST bdev_bounds 00:15:33.417 ************************************ 00:15:33.417 20:42:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:33.417 20:42:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72220 00:15:33.417 20:42:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:33.417 Process bdevio pid: 72220 00:15:33.417 20:42:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72220' 00:15:33.417 20:42:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72220 00:15:33.417 20:42:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72220 ']' 00:15:33.417 20:42:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.417 20:42:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.417 20:42:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:33.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.417 20:42:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.417 20:42:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.417 20:42:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:33.417 [2024-12-06 20:42:50.342763] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
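bdevio is launched here with -w, so it starts up and then sits waiting for RPC commands; the harness's waitforlisten blocks until the app's UNIX-domain socket answers before tests.py perform_tests is allowed to run. A rough stand-in for that readiness check, combined with the unclaimed-bdev query blockdev.sh ran earlier, might look like this (a sketch assuming the default /var/tmp/spdk.sock, not the harness's exact implementation):

    # Block until the target answers RPCs on its socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # List unclaimed bdevs the way blockdev.sh builds bdevs_name
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'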
00:15:33.417 [2024-12-06 20:42:50.342938] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72220 ] 00:15:33.417 [2024-12-06 20:42:50.508395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:33.679 [2024-12-06 20:42:50.643485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.679 [2024-12-06 20:42:50.644244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.679 [2024-12-06 20:42:50.644395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.252 20:42:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.252 20:42:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:34.252 20:42:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:34.252 I/O targets: 00:15:34.252 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:34.252 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:34.253 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:34.253 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:34.253 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:34.253 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:34.253 00:15:34.253 00:15:34.253 CUnit - A unit testing framework for C - Version 2.1-3 00:15:34.253 http://cunit.sourceforge.net/ 00:15:34.253 00:15:34.253 00:15:34.253 Suite: bdevio tests on: nvme3n1 00:15:34.253 Test: blockdev write read block ...passed 00:15:34.253 Test: blockdev write zeroes read block ...passed 00:15:34.253 Test: blockdev write zeroes read no split ...passed 00:15:34.253 Test: blockdev write zeroes read split ...passed 00:15:34.253 Test: blockdev write zeroes read split partial ...passed 00:15:34.253 Test: blockdev reset ...passed 00:15:34.253 Test: blockdev write read 8 blocks ...passed 00:15:34.253 Test: blockdev write read size > 128k ...passed 00:15:34.253 Test: blockdev write read invalid size ...passed 00:15:34.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:34.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:34.253 Test: blockdev write read max offset ...passed 00:15:34.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:34.253 Test: blockdev writev readv 8 blocks ...passed 00:15:34.253 Test: blockdev writev readv 30 x 1block ...passed 00:15:34.253 Test: blockdev writev readv block ...passed 00:15:34.253 Test: blockdev writev readv size > 128k ...passed 00:15:34.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:34.513 Test: blockdev comparev and writev ...passed 00:15:34.513 Test: blockdev nvme passthru rw ...passed 00:15:34.513 Test: blockdev nvme passthru vendor specific ...passed 00:15:34.513 Test: blockdev nvme admin passthru ...passed 00:15:34.513 Test: blockdev copy ...passed 00:15:34.513 Suite: bdevio tests on: nvme2n1 00:15:34.513 Test: blockdev write read block ...passed 00:15:34.513 Test: blockdev write zeroes read block ...passed 00:15:34.513 Test: blockdev write zeroes read no split ...passed 00:15:34.513 Test: blockdev write zeroes read split ...passed 00:15:34.513 Test: blockdev write zeroes read split partial ...passed 00:15:34.513 Test: blockdev reset ...passed 
00:15:34.513 Test: blockdev write read 8 blocks ...passed 00:15:34.513 Test: blockdev write read size > 128k ...passed 00:15:34.513 Test: blockdev write read invalid size ...passed 00:15:34.513 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:34.513 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:34.513 Test: blockdev write read max offset ...passed 00:15:34.513 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:34.513 Test: blockdev writev readv 8 blocks ...passed 00:15:34.513 Test: blockdev writev readv 30 x 1block ...passed 00:15:34.513 Test: blockdev writev readv block ...passed 00:15:34.513 Test: blockdev writev readv size > 128k ...passed 00:15:34.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:34.513 Test: blockdev comparev and writev ...passed 00:15:34.513 Test: blockdev nvme passthru rw ...passed 00:15:34.513 Test: blockdev nvme passthru vendor specific ...passed 00:15:34.513 Test: blockdev nvme admin passthru ...passed 00:15:34.513 Test: blockdev copy ...passed 00:15:34.513 Suite: bdevio tests on: nvme1n1 00:15:34.513 Test: blockdev write read block ...passed 00:15:34.513 Test: blockdev write zeroes read block ...passed 00:15:34.513 Test: blockdev write zeroes read no split ...passed 00:15:34.513 Test: blockdev write zeroes read split ...passed 00:15:34.513 Test: blockdev write zeroes read split partial ...passed 00:15:34.513 Test: blockdev reset ...passed 00:15:34.513 Test: blockdev write read 8 blocks ...passed 00:15:34.513 Test: blockdev write read size > 128k ...passed 00:15:34.513 Test: blockdev write read invalid size ...passed 00:15:34.513 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:34.513 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:34.513 Test: blockdev write read max offset ...passed 00:15:34.513 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:34.513 Test: blockdev writev readv 8 blocks ...passed 00:15:34.513 Test: blockdev writev readv 30 x 1block ...passed 00:15:34.513 Test: blockdev writev readv block ...passed 00:15:34.513 Test: blockdev writev readv size > 128k ...passed 00:15:34.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:34.513 Test: blockdev comparev and writev ...passed 00:15:34.513 Test: blockdev nvme passthru rw ...passed 00:15:34.513 Test: blockdev nvme passthru vendor specific ...passed 00:15:34.513 Test: blockdev nvme admin passthru ...passed 00:15:34.513 Test: blockdev copy ...passed 00:15:34.513 Suite: bdevio tests on: nvme0n3 00:15:34.513 Test: blockdev write read block ...passed 00:15:34.513 Test: blockdev write zeroes read block ...passed 00:15:34.513 Test: blockdev write zeroes read no split ...passed 00:15:34.513 Test: blockdev write zeroes read split ...passed 00:15:34.513 Test: blockdev write zeroes read split partial ...passed 00:15:34.513 Test: blockdev reset ...passed 00:15:34.513 Test: blockdev write read 8 blocks ...passed 00:15:34.513 Test: blockdev write read size > 128k ...passed 00:15:34.513 Test: blockdev write read invalid size ...passed 00:15:34.513 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:34.513 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:34.513 Test: blockdev write read max offset ...passed 00:15:34.513 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:34.513 Test: blockdev writev readv 8 blocks 
...passed 00:15:34.513 Test: blockdev writev readv 30 x 1block ...passed 00:15:34.513 Test: blockdev writev readv block ...passed 00:15:34.513 Test: blockdev writev readv size > 128k ...passed 00:15:34.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:34.513 Test: blockdev comparev and writev ...passed 00:15:34.513 Test: blockdev nvme passthru rw ...passed 00:15:34.513 Test: blockdev nvme passthru vendor specific ...passed 00:15:34.513 Test: blockdev nvme admin passthru ...passed 00:15:34.513 Test: blockdev copy ...passed 00:15:34.513 Suite: bdevio tests on: nvme0n2 00:15:34.513 Test: blockdev write read block ...passed 00:15:34.513 Test: blockdev write zeroes read block ...passed 00:15:34.513 Test: blockdev write zeroes read no split ...passed 00:15:34.773 Test: blockdev write zeroes read split ...passed 00:15:34.773 Test: blockdev write zeroes read split partial ...passed 00:15:34.773 Test: blockdev reset ...passed 00:15:34.773 Test: blockdev write read 8 blocks ...passed 00:15:34.773 Test: blockdev write read size > 128k ...passed 00:15:34.773 Test: blockdev write read invalid size ...passed 00:15:34.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:34.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:34.773 Test: blockdev write read max offset ...passed 00:15:34.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:34.773 Test: blockdev writev readv 8 blocks ...passed 00:15:34.773 Test: blockdev writev readv 30 x 1block ...passed 00:15:34.773 Test: blockdev writev readv block ...passed 00:15:34.773 Test: blockdev writev readv size > 128k ...passed 00:15:34.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:34.773 Test: blockdev comparev and writev ...passed 00:15:34.773 Test: blockdev nvme passthru rw ...passed 00:15:34.773 Test: blockdev nvme passthru vendor specific ...passed 00:15:34.773 Test: blockdev nvme admin passthru ...passed 00:15:34.773 Test: blockdev copy ...passed 00:15:34.773 Suite: bdevio tests on: nvme0n1 00:15:34.773 Test: blockdev write read block ...passed 00:15:34.773 Test: blockdev write zeroes read block ...passed 00:15:34.773 Test: blockdev write zeroes read no split ...passed 00:15:34.773 Test: blockdev write zeroes read split ...passed 00:15:34.773 Test: blockdev write zeroes read split partial ...passed 00:15:34.773 Test: blockdev reset ...passed 00:15:34.773 Test: blockdev write read 8 blocks ...passed 00:15:34.773 Test: blockdev write read size > 128k ...passed 00:15:34.773 Test: blockdev write read invalid size ...passed 00:15:34.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:34.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:34.774 Test: blockdev write read max offset ...passed 00:15:34.774 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:34.774 Test: blockdev writev readv 8 blocks ...passed 00:15:34.774 Test: blockdev writev readv 30 x 1block ...passed 00:15:34.774 Test: blockdev writev readv block ...passed 00:15:34.774 Test: blockdev writev readv size > 128k ...passed 00:15:34.774 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:34.774 Test: blockdev comparev and writev ...passed 00:15:34.774 Test: blockdev nvme passthru rw ...passed 00:15:34.774 Test: blockdev nvme passthru vendor specific ...passed 00:15:34.774 Test: blockdev nvme admin passthru ...passed 00:15:34.774 Test: blockdev copy ...passed 
00:15:34.774
00:15:34.774 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:15:34.774               suites      6      6    n/a      0        0
00:15:34.774                tests    138    138    138      0        0
00:15:34.774              asserts    780    780    780      0      n/a
00:15:34.774
00:15:34.774 Elapsed time = 1.229 seconds
00:15:34.774 0
00:15:34.774 20:42:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72220
00:15:34.774 20:42:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72220 ']'
00:15:34.774 20:42:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72220
00:15:34.774 20:42:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:15:34.774 20:42:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:34.774 20:42:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72220
00:15:34.774 20:42:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:34.774 20:42:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:34.774 20:42:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72220'
killing process with pid 72220
20:42:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72220
20:42:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72220
00:15:35.717 20:42:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:15:35.717
00:15:35.717 real 0m2.368s
00:15:35.717 user 0m5.735s
00:15:35.717 sys 0m0.366s
00:15:35.717 20:42:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:35.717 20:42:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:15:35.717 ************************************
00:15:35.717 END TEST bdev_bounds
00:15:35.717 ************************************
00:15:35.717 20:42:52 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:15:35.717 20:42:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:15:35.717 20:42:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:35.717 20:42:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:35.717 ************************************
00:15:35.717 START TEST bdev_nbd
00:15:35.717 ************************************
00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
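bdev_nbd, which starts here, re-exports each of the six bdevs through the kernel's nbd driver, so the first thing blockdev.sh checks next is that /sys/module/nbd exists. Reproducing the setup by hand means loading the module, exporting a bdev over the spdk-nbd.sock RPC server seen in the trace, and repeating the harness's one-block direct-I/O read (a sketch under those assumptions):

    # The kernel module must be loaded before /dev/nbd* nodes exist
    sudo modprobe nbd
    # Export one bdev as a block device, as the trace does for nvme0n1
    sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    # Same sanity check as waitfornbd: read a single 4096-byte block with O_DIRECT
    sudo dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct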
00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:35.717 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72283 00:15:35.718 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:35.718 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72283 /var/tmp/spdk-nbd.sock 00:15:35.718 20:42:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:35.718 20:42:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72283 ']' 00:15:35.718 20:42:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:35.718 20:42:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:35.718 20:42:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:35.718 20:42:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.718 20:42:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:35.718 [2024-12-06 20:42:52.788780] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:15:35.718 [2024-12-06 20:42:52.788953] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.980 [2024-12-06 20:42:52.955532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.980 [2024-12-06 20:42:53.089100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:36.552 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.814 
1+0 records in 00:15:36.814 1+0 records out 00:15:36.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766658 s, 5.3 MB/s 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:36.814 20:42:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.076 1+0 records in 00:15:37.076 1+0 records out 00:15:37.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106508 s, 3.8 MB/s 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:37.076 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:37.338 20:42:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.338 1+0 records in 00:15:37.338 1+0 records out 00:15:37.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104051 s, 3.9 MB/s 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:37.338 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:37.599 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:37.599 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:37.599 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:37.599 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:37.599 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.599 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.599 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.599 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:37.599 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.600 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.600 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.600 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.600 1+0 records in 00:15:37.600 1+0 records out 00:15:37.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00159101 s, 2.6 MB/s 00:15:37.600 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.600 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.600 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.600 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.600 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.600 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:37.600 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:37.600 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.861 1+0 records in 00:15:37.861 1+0 records out 00:15:37.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119949 s, 3.4 MB/s 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:37.861 20:42:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:38.132 20:42:55 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.132 1+0 records in 00:15:38.132 1+0 records out 00:15:38.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00157429 s, 2.6 MB/s 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:38.132 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:38.397 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:38.397 { 00:15:38.397 "nbd_device": "/dev/nbd0", 00:15:38.397 "bdev_name": "nvme0n1" 00:15:38.397 }, 00:15:38.397 { 00:15:38.397 "nbd_device": "/dev/nbd1", 00:15:38.397 "bdev_name": "nvme0n2" 00:15:38.397 }, 00:15:38.397 { 00:15:38.397 "nbd_device": "/dev/nbd2", 00:15:38.397 "bdev_name": "nvme0n3" 00:15:38.397 }, 00:15:38.397 { 00:15:38.397 "nbd_device": "/dev/nbd3", 00:15:38.397 "bdev_name": "nvme1n1" 00:15:38.397 }, 00:15:38.397 { 00:15:38.397 "nbd_device": "/dev/nbd4", 00:15:38.397 "bdev_name": "nvme2n1" 00:15:38.397 }, 00:15:38.397 { 00:15:38.397 "nbd_device": "/dev/nbd5", 00:15:38.397 "bdev_name": "nvme3n1" 00:15:38.397 } 00:15:38.397 ]' 00:15:38.397 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:38.397 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:38.397 { 00:15:38.397 "nbd_device": "/dev/nbd0", 00:15:38.397 "bdev_name": "nvme0n1" 00:15:38.397 }, 00:15:38.397 { 00:15:38.397 "nbd_device": "/dev/nbd1", 00:15:38.397 "bdev_name": "nvme0n2" 00:15:38.397 }, 00:15:38.397 { 00:15:38.397 "nbd_device": "/dev/nbd2", 00:15:38.397 "bdev_name": "nvme0n3" 00:15:38.397 }, 00:15:38.397 { 00:15:38.397 "nbd_device": "/dev/nbd3", 00:15:38.397 "bdev_name": "nvme1n1" 00:15:38.397 }, 00:15:38.397 { 00:15:38.397 "nbd_device": "/dev/nbd4", 00:15:38.397 "bdev_name": "nvme2n1" 00:15:38.397 }, 00:15:38.397 { 00:15:38.397 "nbd_device": "/dev/nbd5", 00:15:38.397 "bdev_name": "nvme3n1" 00:15:38.397 } 00:15:38.397 ]' 00:15:38.397 20:42:55 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:38.397 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:38.397 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.397 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:38.397 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.397 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:38.397 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.397 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:38.657 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.657 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.657 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.657 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.657 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.657 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.657 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:38.657 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.657 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.657 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:38.917 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:38.917 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:38.917 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:38.917 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.917 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.917 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:38.917 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:38.917 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.917 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.917 20:42:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:39.178 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:39.178 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:39.178 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:39.178 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.178 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.178 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:39.178 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.178 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.178 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.178 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.440 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:39.702 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:39.702 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:39.702 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:39.702 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.702 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.702 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:39.702 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.702 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.702 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:39.702 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.702 20:42:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:39.963 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.964 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:39.964 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.964 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:39.964 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.964 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:39.964 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.964 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:39.964 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:40.226 /dev/nbd0 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.226 1+0 records in 00:15:40.226 1+0 records out 00:15:40.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105632 s, 3.9 MB/s 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:40.226 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:40.487 /dev/nbd1 00:15:40.487 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:40.487 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:40.487 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:40.487 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.488 1+0 records in 00:15:40.488 1+0 records out 00:15:40.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123246 s, 3.3 MB/s 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:40.488 20:42:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:40.488 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:40.749 /dev/nbd10 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.749 1+0 records in 00:15:40.749 1+0 records out 00:15:40.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000975008 s, 4.2 MB/s 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:40.749 20:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:41.010 /dev/nbd11 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.010 20:42:58 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.010 1+0 records in 00:15:41.010 1+0 records out 00:15:41.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120999 s, 3.4 MB/s 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.010 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:41.011 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.011 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:41.011 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:41.272 /dev/nbd12 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.272 1+0 records in 00:15:41.272 1+0 records out 00:15:41.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00196558 s, 2.1 MB/s 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.272 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.273 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:41.273 20:42:58 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.273 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:41.273 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:41.536 /dev/nbd13 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.536 1+0 records in 00:15:41.536 1+0 records out 00:15:41.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104537 s, 3.9 MB/s 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.536 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:41.799 { 00:15:41.799 "nbd_device": "/dev/nbd0", 00:15:41.799 "bdev_name": "nvme0n1" 00:15:41.799 }, 00:15:41.799 { 00:15:41.799 "nbd_device": "/dev/nbd1", 00:15:41.799 "bdev_name": "nvme0n2" 00:15:41.799 }, 00:15:41.799 { 00:15:41.799 "nbd_device": "/dev/nbd10", 00:15:41.799 "bdev_name": "nvme0n3" 00:15:41.799 }, 00:15:41.799 { 00:15:41.799 "nbd_device": "/dev/nbd11", 00:15:41.799 "bdev_name": "nvme1n1" 00:15:41.799 }, 00:15:41.799 { 00:15:41.799 "nbd_device": "/dev/nbd12", 00:15:41.799 "bdev_name": "nvme2n1" 00:15:41.799 }, 00:15:41.799 { 00:15:41.799 "nbd_device": "/dev/nbd13", 00:15:41.799 "bdev_name": "nvme3n1" 00:15:41.799 } 00:15:41.799 ]' 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:41.799 { 00:15:41.799 "nbd_device": "/dev/nbd0", 00:15:41.799 "bdev_name": "nvme0n1" 00:15:41.799 }, 00:15:41.799 { 00:15:41.799 "nbd_device": "/dev/nbd1", 00:15:41.799 "bdev_name": "nvme0n2" 00:15:41.799 }, 00:15:41.799 { 00:15:41.799 "nbd_device": "/dev/nbd10", 00:15:41.799 "bdev_name": "nvme0n3" 00:15:41.799 }, 00:15:41.799 { 00:15:41.799 "nbd_device": "/dev/nbd11", 00:15:41.799 "bdev_name": "nvme1n1" 00:15:41.799 }, 00:15:41.799 { 00:15:41.799 "nbd_device": "/dev/nbd12", 00:15:41.799 "bdev_name": "nvme2n1" 00:15:41.799 }, 00:15:41.799 { 00:15:41.799 "nbd_device": "/dev/nbd13", 00:15:41.799 "bdev_name": "nvme3n1" 00:15:41.799 } 00:15:41.799 ]' 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:41.799 /dev/nbd1 00:15:41.799 /dev/nbd10 00:15:41.799 /dev/nbd11 00:15:41.799 /dev/nbd12 00:15:41.799 /dev/nbd13' 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:41.799 /dev/nbd1 00:15:41.799 /dev/nbd10 00:15:41.799 /dev/nbd11 00:15:41.799 /dev/nbd12 00:15:41.799 /dev/nbd13' 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:41.799 256+0 records in 00:15:41.799 256+0 records out 00:15:41.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684119 s, 153 MB/s 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:41.799 20:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:42.061 256+0 records in 00:15:42.061 256+0 records out 00:15:42.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244839 s, 4.3 MB/s 00:15:42.061 20:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:42.061 20:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:42.323 256+0 records in 00:15:42.323 256+0 records out 00:15:42.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245389 s, 
4.3 MB/s 00:15:42.323 20:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:42.323 20:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:42.589 256+0 records in 00:15:42.589 256+0 records out 00:15:42.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.248644 s, 4.2 MB/s 00:15:42.589 20:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:42.589 20:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:42.851 256+0 records in 00:15:42.851 256+0 records out 00:15:42.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.253437 s, 4.1 MB/s 00:15:42.851 20:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:42.851 20:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:43.113 256+0 records in 00:15:43.113 256+0 records out 00:15:43.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.211757 s, 5.0 MB/s 00:15:43.113 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:43.113 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:43.378 256+0 records in 00:15:43.378 256+0 records out 00:15:43.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.295758 s, 3.5 MB/s 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:43.378 
20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:43.378 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:43.379 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:43.379 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:43.379 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:43.379 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.379 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:43.379 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:43.379 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:43.379 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.379 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:43.648 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:43.648 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:43.648 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:43.648 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.648 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.648 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:43.648 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:43.648 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.648 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.648 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:43.911 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:43.911 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:43.911 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:43.911 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.911 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.911 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:43.911 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:43.911 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.911 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.911 20:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:44.173 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:44.173 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:44.173 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:44.173 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.173 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.173 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:44.173 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:44.173 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.173 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.173 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:44.435 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:44.435 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:44.435 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:44.435 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.435 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.435 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:44.435 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:44.435 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.435 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.435 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.697 
20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.697 20:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:44.959 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:44.959 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:44.959 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:45.221 malloc_lvol_verify 00:15:45.221 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:45.483 936edd03-6301-47cf-8e32-64325077be96 00:15:45.483 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:45.745 2c152cf9-7789-4864-837b-d0ad6ae4a9d5 00:15:45.745 20:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:46.007 /dev/nbd0 00:15:46.007 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:46.007 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:46.007 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:46.007 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:46.007 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:46.007 mke2fs 1.47.0 (5-Feb-2023) 00:15:46.007 Discarding device blocks: 0/4096 
done 00:15:46.007 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:46.007 00:15:46.007 Allocating group tables: 0/1 done 00:15:46.007 Writing inode tables: 0/1 done 00:15:46.007 Creating journal (1024 blocks): done 00:15:46.007 Writing superblocks and filesystem accounting information: 0/1 done 00:15:46.007 00:15:46.007 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:46.007 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.007 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:46.007 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:46.007 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:46.007 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.007 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72283 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72283 ']' 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72283 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72283 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.269 killing process with pid 72283 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72283' 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72283 00:15:46.269 20:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72283 00:15:47.215 20:43:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:47.215 00:15:47.215 real 0m11.381s 00:15:47.215 user 0m15.175s 00:15:47.215 sys 0m3.971s 00:15:47.215 ************************************ 00:15:47.215 END TEST bdev_nbd 00:15:47.215 ************************************ 00:15:47.215 20:43:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.215 20:43:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
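Each /dev/nbdN attach in the bdev_nbd test above is followed by the same readiness dance: poll /proc/partitions for the device name, then pull a single block with O_DIRECT and confirm it landed. A minimal standalone sketch of that helper in bash — the retry budget of 20, the 4 KiB direct read, and the non-zero-size check mirror the log; the sleep between attempts and the scratch-file path are assumptions:

    # Wait until an NBD device is usable, in the style of waitfornbd above.
    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest size i
        for ((i = 1; i <= 20; i++)); do
            # -w matches whole words, so "nbd1" cannot match "nbd10"
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: the run above never needs a retry
        done
        for ((i = 1; i <= 20; i++)); do
            # iflag=direct bypasses the page cache: success proves the kernel
            # can service real reads, not merely that the device node exists
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null && break
            sleep 0.1
        done
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]   # a non-empty read counts as success
    }

The attach and detach themselves go over the app's RPC socket (rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk <bdev> /dev/nbdN), and the data-verify pass simply dd's the same 1 MiB of /dev/urandom onto every device and cmp's it back, as the 256-block transfers above show.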
00:15:47.215 20:43:04 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:47.215 20:43:04 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:47.215 20:43:04 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:47.215 20:43:04 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:47.215 20:43:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:47.215 20:43:04 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.215 20:43:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.215 ************************************ 00:15:47.215 START TEST bdev_fio 00:15:47.215 ************************************ 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:47.215 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:47.215 ************************************ 00:15:47.215 START TEST bdev_fio_rw_verify 00:15:47.215 ************************************ 00:15:47.215 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:47.216 20:43:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:47.477 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.477 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.477 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.477 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.477 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.477 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.477 fio-3.35 00:15:47.477 Starting 6 threads 00:15:59.715 00:15:59.715 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72695: Fri Dec 6 20:43:15 2024 00:15:59.715 read: IOPS=18.2k, BW=71.0MiB/s (74.5MB/s)(710MiB/10002msec) 00:15:59.715 slat (usec): min=2, max=2375, avg= 5.75, stdev=14.68 00:15:59.715 clat (usec): min=72, max=13352, avg=1032.44, stdev=757.65 00:15:59.715 lat (usec): min=76, max=13376, avg=1038.18, stdev=758.48 
00:15:59.715 clat percentiles (usec): 00:15:59.715 | 50.000th=[ 832], 99.000th=[ 3425], 99.900th=[ 4883], 99.990th=[ 8160], 00:15:59.715 | 99.999th=[13304] 00:15:59.715 write: IOPS=18.5k, BW=72.3MiB/s (75.9MB/s)(724MiB/10002msec); 0 zone resets 00:15:59.715 slat (usec): min=12, max=4020, avg=36.92, stdev=124.73 00:15:59.715 clat (usec): min=73, max=7224, avg=1278.42, stdev=837.56 00:15:59.715 lat (usec): min=87, max=7266, avg=1315.33, stdev=852.14 00:15:59.715 clat percentiles (usec): 00:15:59.715 | 50.000th=[ 1090], 99.000th=[ 3916], 99.900th=[ 5342], 99.990th=[ 6390], 00:15:59.715 | 99.999th=[ 7177] 00:15:59.715 bw ( KiB/s): min=48935, max=138598, per=100.00%, avg=75225.32, stdev=4356.72, samples=114 00:15:59.715 iops : min=12230, max=34649, avg=18805.16, stdev=1089.22, samples=114 00:15:59.715 lat (usec) : 100=0.03%, 250=5.80%, 500=15.76%, 750=17.63%, 1000=12.70% 00:15:59.715 lat (msec) : 2=33.89%, 4=13.55%, 10=0.64%, 20=0.01% 00:15:59.715 cpu : usr=42.77%, sys=32.15%, ctx=6334, majf=0, minf=17278 00:15:59.715 IO depths : 1=11.4%, 2=23.8%, 4=51.1%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:59.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.715 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.715 issued rwts: total=181830,185236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.715 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:59.715 00:15:59.715 Run status group 0 (all jobs): 00:15:59.715 READ: bw=71.0MiB/s (74.5MB/s), 71.0MiB/s-71.0MiB/s (74.5MB/s-74.5MB/s), io=710MiB (745MB), run=10002-10002msec 00:15:59.715 WRITE: bw=72.3MiB/s (75.9MB/s), 72.3MiB/s-72.3MiB/s (75.9MB/s-75.9MB/s), io=724MiB (759MB), run=10002-10002msec 00:15:59.715 ----------------------------------------------------- 00:15:59.715 Suppressions used: 00:15:59.715 count bytes template 00:15:59.715 6 48 /usr/src/fio/parse.c 00:15:59.715 3288 315648 /usr/src/fio/iolog.c 00:15:59.715 1 8 libtcmalloc_minimal.so 00:15:59.715 1 904 libcrypto.so 00:15:59.715 ----------------------------------------------------- 00:15:59.715 00:15:59.715 00:15:59.715 real 0m11.918s 00:15:59.715 user 0m27.181s 00:15:59.715 sys 0m19.576s 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.715 ************************************ 00:15:59.715 END TEST bdev_fio_rw_verify 00:15:59.715 ************************************ 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:59.715 20:43:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:59.716 20:43:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "779fcd32-0a0b-41d2-82b4-6ffa5fe5e272"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "779fcd32-0a0b-41d2-82b4-6ffa5fe5e272",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "e3a42682-f7d9-48f1-8232-07bc81e33d99"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3a42682-f7d9-48f1-8232-07bc81e33d99",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "ed91b765-2db3-4320-a8f6-7f1845444d34"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed91b765-2db3-4320-a8f6-7f1845444d34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "949b22b0-2688-4532-9195-db780f0d7aff"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "949b22b0-2688-4532-9195-db780f0d7aff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6b63bf5e-6b0a-4378-908b-71cb97a5d6ba"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6b63bf5e-6b0a-4378-908b-71cb97a5d6ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "712291ce-a167-4209-a190-7a6fe041edc7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "712291ce-a167-4209-a190-7a6fe041edc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:59.716 20:43:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:59.716 20:43:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:59.716 /home/vagrant/spdk_repo/spdk 00:15:59.716 20:43:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:59.716 20:43:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:59.716 20:43:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:59.716 00:15:59.716 real 0m12.080s 00:15:59.716 user 
0m27.249s 00:15:59.716 sys 0m19.650s 00:15:59.716 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.716 20:43:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:59.716 ************************************ 00:15:59.716 END TEST bdev_fio 00:15:59.716 ************************************ 00:15:59.716 20:43:16 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:59.716 20:43:16 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:59.716 20:43:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:59.716 20:43:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.716 20:43:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:59.716 ************************************ 00:15:59.716 START TEST bdev_verify 00:15:59.716 ************************************ 00:15:59.716 20:43:16 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:59.716 [2024-12-06 20:43:16.365839] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:15:59.716 [2024-12-06 20:43:16.366024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72868 ] 00:15:59.716 [2024-12-06 20:43:16.531941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:59.716 [2024-12-06 20:43:16.652010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.716 [2024-12-06 20:43:16.652049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.977 Running I/O for 5 seconds... 
00:16:02.310 24917.00 IOPS, 97.33 MiB/s [2024-12-06T20:43:20.384Z] 24144.00 IOPS, 94.31 MiB/s [2024-12-06T20:43:21.328Z] 23836.33 IOPS, 93.11 MiB/s [2024-12-06T20:43:22.270Z] 23605.00 IOPS, 92.21 MiB/s [2024-12-06T20:43:22.270Z] 23366.20 IOPS, 91.27 MiB/s 00:16:05.137 Latency(us) 00:16:05.137 [2024-12-06T20:43:22.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.137 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:05.137 Verification LBA range: start 0x0 length 0x80000 00:16:05.137 nvme0n1 : 5.06 1871.65 7.31 0.00 0.00 68261.10 7360.20 84289.38 00:16:05.137 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:05.137 Verification LBA range: start 0x80000 length 0x80000 00:16:05.137 nvme0n1 : 5.04 1928.67 7.53 0.00 0.00 66239.78 5343.70 76626.71 00:16:05.137 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:05.137 Verification LBA range: start 0x0 length 0x80000 00:16:05.137 nvme0n2 : 5.06 1820.35 7.11 0.00 0.00 70028.12 11494.01 74610.22 00:16:05.137 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:05.137 Verification LBA range: start 0x80000 length 0x80000 00:16:05.137 nvme0n2 : 5.05 1877.34 7.33 0.00 0.00 67905.71 7763.50 68157.44 00:16:05.137 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:05.137 Verification LBA range: start 0x0 length 0x80000 00:16:05.137 nvme0n3 : 5.06 1819.68 7.11 0.00 0.00 69904.15 9175.04 62914.56 00:16:05.137 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:05.137 Verification LBA range: start 0x80000 length 0x80000 00:16:05.137 nvme0n3 : 5.06 1846.06 7.21 0.00 0.00 68906.33 10435.35 58074.98 00:16:05.137 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:05.137 Verification LBA range: start 0x0 length 0x20000 00:16:05.137 nvme1n1 : 5.07 1818.09 7.10 0.00 0.00 69820.66 5520.15 69770.63 00:16:05.137 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:05.137 Verification LBA range: start 0x20000 length 0x20000 00:16:05.137 nvme1n1 : 5.06 1848.03 7.22 0.00 0.00 68676.92 7965.14 66140.95 00:16:05.137 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:05.137 Verification LBA range: start 0x0 length 0xa0000 00:16:05.137 nvme2n1 : 5.06 1821.91 7.12 0.00 0.00 69522.74 6452.78 91548.75 00:16:05.137 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:05.137 Verification LBA range: start 0xa0000 length 0xa0000 00:16:05.137 nvme2n1 : 5.07 1869.96 7.30 0.00 0.00 67729.48 6553.60 66947.54 00:16:05.137 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:05.137 Verification LBA range: start 0x0 length 0xbd0bd 00:16:05.137 nvme3n1 : 5.08 2373.98 9.27 0.00 0.00 53176.66 6251.13 67754.14 00:16:05.137 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:05.137 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:05.137 nvme3n1 : 5.07 2334.09 9.12 0.00 0.00 54073.25 4663.14 60091.47 00:16:05.137 [2024-12-06T20:43:22.270Z] =================================================================================================================== 00:16:05.137 [2024-12-06T20:43:22.270Z] Total : 23229.81 90.74 0.00 0.00 65619.21 4663.14 91548.75 00:16:06.081 00:16:06.081 real 0m6.728s 00:16:06.081 user 0m10.842s 00:16:06.081 sys 0m1.489s 00:16:06.081 20:43:23 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.081 ************************************ 00:16:06.081 END TEST bdev_verify 00:16:06.081 ************************************ 00:16:06.081 20:43:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:06.081 20:43:23 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:06.081 20:43:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:06.081 20:43:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.081 20:43:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.081 ************************************ 00:16:06.081 START TEST bdev_verify_big_io 00:16:06.081 ************************************ 00:16:06.081 20:43:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:06.081 [2024-12-06 20:43:23.162334] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:16:06.081 [2024-12-06 20:43:23.162484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72961 ] 00:16:06.342 [2024-12-06 20:43:23.327648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:06.342 [2024-12-06 20:43:23.462655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.342 [2024-12-06 20:43:23.462749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.939 Running I/O for 5 seconds... 
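This big-I/O pass repeats the verify workload with 64 KiB blocks (-o 65536 in the command line above, versus -o 4096 in the previous run), so IOPS and throughput in the table below are locked in a fixed ratio: MiB/s = IOPS × 65536 / 2^20 = IOPS / 16. A one-line sanity check against the nvme3n1 (Core Mask 0x1) row that follows:

    awk 'BEGIN { iops = 183.39; printf "%.2f MiB/s\n", iops / 16 }'
    # prints 11.46 MiB/s, matching the table; nvme0n1 works out the same
    # way: 128.56 / 16 = 8.04 MiB/s

The same identity with a divisor of 256 holds for the 4 KiB verify table above (23366.20 IOPS / 256 = 91.27 MiB/s in the Total row).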
00:16:11.401 200.00 IOPS, 12.50 MiB/s [2024-12-06T20:43:30.455Z] 1596.00 IOPS, 99.75 MiB/s [2024-12-06T20:43:30.455Z] 2381.33 IOPS, 148.83 MiB/s 00:16:13.322 Latency(us) 00:16:13.322 [2024-12-06T20:43:30.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.322 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:13.322 Verification LBA range: start 0x0 length 0x8000 00:16:13.322 nvme0n1 : 5.85 128.56 8.04 0.00 0.00 940335.35 11241.94 2155226.98 00:16:13.322 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:13.322 Verification LBA range: start 0x8000 length 0x8000 00:16:13.322 nvme0n1 : 5.91 121.93 7.62 0.00 0.00 1022341.58 34683.67 1051802.39 00:16:13.322 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:13.322 Verification LBA range: start 0x0 length 0x8000 00:16:13.322 nvme0n2 : 5.91 119.11 7.44 0.00 0.00 999994.58 86709.17 903388.55 00:16:13.322 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:13.322 Verification LBA range: start 0x8000 length 0x8000 00:16:13.322 nvme0n2 : 5.84 109.66 6.85 0.00 0.00 1064050.06 70980.53 909841.33 00:16:13.322 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:13.322 Verification LBA range: start 0x0 length 0x8000 00:16:13.322 nvme0n3 : 6.00 101.40 6.34 0.00 0.00 1149141.26 141961.06 1780966.01 00:16:13.322 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:13.322 Verification LBA range: start 0x8000 length 0x8000 00:16:13.322 nvme0n3 : 5.75 108.55 6.78 0.00 0.00 1057952.04 177451.32 1935832.62 00:16:13.322 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:13.322 Verification LBA range: start 0x0 length 0x2000 00:16:13.322 nvme1n1 : 6.01 124.42 7.78 0.00 0.00 912911.09 79853.10 1503496.66 00:16:13.322 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:13.322 Verification LBA range: start 0x2000 length 0x2000 00:16:13.322 nvme1n1 : 5.98 151.11 9.44 0.00 0.00 752086.94 28835.84 1484138.34 00:16:13.322 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:13.322 Verification LBA range: start 0x0 length 0xa000 00:16:13.322 nvme2n1 : 6.00 125.26 7.83 0.00 0.00 873232.07 86305.87 1587382.74 00:16:13.322 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:13.322 Verification LBA range: start 0xa000 length 0xa000 00:16:13.322 nvme2n1 : 5.99 128.32 8.02 0.00 0.00 853977.67 86709.17 851766.35 00:16:13.322 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:13.322 Verification LBA range: start 0x0 length 0xbd0b 00:16:13.322 nvme3n1 : 6.02 183.39 11.46 0.00 0.00 584027.45 4234.63 1284102.30 00:16:13.322 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:13.322 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:13.322 nvme3n1 : 6.00 165.28 10.33 0.00 0.00 649322.67 4285.05 2452054.65 00:16:13.322 [2024-12-06T20:43:30.455Z] =================================================================================================================== 00:16:13.322 [2024-12-06T20:43:30.455Z] Total : 1566.97 97.94 0.00 0.00 874968.89 4234.63 2452054.65 00:16:14.266 00:16:14.266 real 0m7.949s 00:16:14.266 user 0m14.496s 00:16:14.266 sys 0m0.494s 00:16:14.266 20:43:31 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.266 20:43:31 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.266 ************************************ 00:16:14.266 END TEST bdev_verify_big_io 00:16:14.266 ************************************ 00:16:14.266 20:43:31 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:14.266 20:43:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:14.266 20:43:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.266 20:43:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:14.266 ************************************ 00:16:14.266 START TEST bdev_write_zeroes 00:16:14.266 ************************************ 00:16:14.266 20:43:31 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:14.266 [2024-12-06 20:43:31.183792] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:16:14.266 [2024-12-06 20:43:31.184289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73072 ] 00:16:14.266 [2024-12-06 20:43:31.346921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.527 [2024-12-06 20:43:31.466108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.787 Running I/O for 1 seconds... 
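[annotation] The write-zeroes pass reuses the same bdevperf harness with only the workload and duration changed; the invocation below is lifted from the run_test line above (no -m/-C this time, so a single core and one job per bdev).

# write_zeroes workload: 4 KiB requests at queue depth 128 for 1 second,
# exercising each bdev's write-zeroes path on the default single core
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1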
00:16:16.229 65536.00 IOPS, 256.00 MiB/s 00:16:16.229 Latency(us) 00:16:16.229 [2024-12-06T20:43:33.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.229 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:16.229 nvme0n1 : 1.02 10938.68 42.73 0.00 0.00 11689.58 6099.89 19862.45 00:16:16.229 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:16.229 nvme0n2 : 1.02 10674.61 41.70 0.00 0.00 11968.64 7259.37 22080.59 00:16:16.229 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:16.229 nvme0n3 : 1.02 10661.64 41.65 0.00 0.00 11972.27 7309.78 22080.59 00:16:16.229 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:16.229 nvme1n1 : 1.02 10748.31 41.99 0.00 0.00 11864.72 7208.96 22080.59 00:16:16.229 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:16.229 nvme2n1 : 1.02 10719.18 41.87 0.00 0.00 11886.45 7309.78 21979.77 00:16:16.229 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:16.229 nvme3n1 : 1.02 11387.23 44.48 0.00 0.00 11178.06 5268.09 20971.52 00:16:16.229 [2024-12-06T20:43:33.362Z] =================================================================================================================== 00:16:16.229 [2024-12-06T20:43:33.362Z] Total : 65129.65 254.41 0.00 0.00 11753.16 5268.09 22080.59 00:16:16.797 00:16:16.797 real 0m2.611s 00:16:16.797 user 0m1.930s 00:16:16.797 sys 0m0.484s 00:16:16.797 ************************************ 00:16:16.797 END TEST bdev_write_zeroes 00:16:16.797 ************************************ 00:16:16.797 20:43:33 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.797 20:43:33 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:16.797 20:43:33 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:16.797 20:43:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:16.797 20:43:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.797 20:43:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:16.797 ************************************ 00:16:16.797 START TEST bdev_json_nonenclosed 00:16:16.797 ************************************ 00:16:16.797 20:43:33 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:16.797 [2024-12-06 20:43:33.859733] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:16:16.797 [2024-12-06 20:43:33.859879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73125 ] 00:16:17.056 [2024-12-06 20:43:34.023948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.056 [2024-12-06 20:43:34.142671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.056 [2024-12-06 20:43:34.142776] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:17.056 [2024-12-06 20:43:34.142796] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:17.056 [2024-12-06 20:43:34.142806] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:17.315 00:16:17.316 real 0m0.546s 00:16:17.316 user 0m0.314s 00:16:17.316 sys 0m0.126s 00:16:17.316 20:43:34 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.316 ************************************ 00:16:17.316 END TEST bdev_json_nonenclosed 00:16:17.316 ************************************ 00:16:17.316 20:43:34 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:17.316 20:43:34 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:17.316 20:43:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:17.316 20:43:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.316 20:43:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.316 ************************************ 00:16:17.316 START TEST bdev_json_nonarray 00:16:17.316 ************************************ 00:16:17.316 20:43:34 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:17.574 [2024-12-06 20:43:34.474283] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:16:17.574 [2024-12-06 20:43:34.474611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73145 ] 00:16:17.574 [2024-12-06 20:43:34.638858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.834 [2024-12-06 20:43:34.763487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.834 [2024-12-06 20:43:34.763596] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
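[annotation] bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each feeds bdevperf a deliberately malformed --json config and passes only if the app refuses it and exits non-zero ("spdk_app_stop'd on non-zero" above). The fixture files are not reproduced in this log; the shapes below are hypothetical minimal stand-ins inferred purely from the two error strings.

# hypothetical reproductions, assuming only what the error messages state;
# the real test/bdev/nonenclosed.json and nonarray.json may differ
echo '[]' > nonenclosed.json                 # top level not an object -> "not enclosed in {}"
echo '{ "subsystems": {} }' > nonarray.json  # enclosed, but "subsystems" is not an array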
00:16:17.834 [2024-12-06 20:43:34.763617] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:17.834 [2024-12-06 20:43:34.763627] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:17.834 00:16:17.834 real 0m0.555s 00:16:17.834 user 0m0.324s 00:16:17.834 sys 0m0.124s 00:16:17.834 ************************************ 00:16:17.834 END TEST bdev_json_nonarray 00:16:17.834 ************************************ 00:16:17.834 20:43:34 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.834 20:43:34 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:18.095 20:43:35 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:18.095 20:43:35 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:18.095 20:43:35 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:18.095 20:43:35 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:18.095 20:43:35 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:18.095 20:43:35 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:18.095 20:43:35 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:18.095 20:43:35 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:18.095 20:43:35 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:18.095 20:43:35 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:18.095 20:43:35 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:18.095 20:43:35 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:18.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:50.842 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:57.428 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:57.428 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:57.428 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:57.428 00:16:57.428 real 1m29.526s 00:16:57.428 user 1m23.800s 00:16:57.428 sys 1m33.495s 00:16:57.428 20:44:13 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.428 ************************************ 00:16:57.428 END TEST blockdev_xnvme 00:16:57.428 ************************************ 00:16:57.428 20:44:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.428 20:44:14 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:57.428 20:44:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:57.428 20:44:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.428 20:44:14 -- common/autotest_common.sh@10 -- # set +x 00:16:57.428 ************************************ 00:16:57.428 START TEST ublk 00:16:57.428 ************************************ 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:57.428 * Looking for test storage... 
00:16:57.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:57.428 20:44:14 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:57.428 20:44:14 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:57.428 20:44:14 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:57.428 20:44:14 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.428 20:44:14 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:57.428 20:44:14 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:57.428 20:44:14 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:57.428 20:44:14 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:57.428 20:44:14 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:57.428 20:44:14 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:57.428 20:44:14 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:57.428 20:44:14 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:57.428 20:44:14 ublk -- scripts/common.sh@345 -- # : 1 00:16:57.428 20:44:14 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:57.428 20:44:14 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:57.428 20:44:14 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:57.428 20:44:14 ublk -- scripts/common.sh@353 -- # local d=1 00:16:57.428 20:44:14 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.428 20:44:14 ublk -- scripts/common.sh@355 -- # echo 1 00:16:57.428 20:44:14 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:57.428 20:44:14 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:57.428 20:44:14 ublk -- scripts/common.sh@353 -- # local d=2 00:16:57.428 20:44:14 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.428 20:44:14 ublk -- scripts/common.sh@355 -- # echo 2 00:16:57.428 20:44:14 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:57.428 20:44:14 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:57.428 20:44:14 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:57.428 20:44:14 ublk -- scripts/common.sh@368 -- # return 0 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:57.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.428 --rc genhtml_branch_coverage=1 00:16:57.428 --rc genhtml_function_coverage=1 00:16:57.428 --rc genhtml_legend=1 00:16:57.428 --rc geninfo_all_blocks=1 00:16:57.428 --rc geninfo_unexecuted_blocks=1 00:16:57.428 00:16:57.428 ' 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:57.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.428 --rc genhtml_branch_coverage=1 00:16:57.428 --rc genhtml_function_coverage=1 00:16:57.428 --rc genhtml_legend=1 00:16:57.428 --rc geninfo_all_blocks=1 00:16:57.428 --rc geninfo_unexecuted_blocks=1 00:16:57.428 00:16:57.428 ' 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:57.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.428 --rc genhtml_branch_coverage=1 00:16:57.428 --rc 
genhtml_function_coverage=1 00:16:57.428 --rc genhtml_legend=1 00:16:57.428 --rc geninfo_all_blocks=1 00:16:57.428 --rc geninfo_unexecuted_blocks=1 00:16:57.428 00:16:57.428 ' 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:57.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.428 --rc genhtml_branch_coverage=1 00:16:57.428 --rc genhtml_function_coverage=1 00:16:57.428 --rc genhtml_legend=1 00:16:57.428 --rc geninfo_all_blocks=1 00:16:57.428 --rc geninfo_unexecuted_blocks=1 00:16:57.428 00:16:57.428 ' 00:16:57.428 20:44:14 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:57.428 20:44:14 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:57.428 20:44:14 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:57.428 20:44:14 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:57.428 20:44:14 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:57.428 20:44:14 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:57.428 20:44:14 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:57.428 20:44:14 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:57.428 20:44:14 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:57.428 20:44:14 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:57.428 20:44:14 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:57.428 20:44:14 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:57.428 20:44:14 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:57.428 20:44:14 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:57.428 20:44:14 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:57.428 20:44:14 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:57.428 20:44:14 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:57.428 20:44:14 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:57.428 20:44:14 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:57.428 20:44:14 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.428 20:44:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.428 ************************************ 00:16:57.428 START TEST test_save_ublk_config 00:16:57.428 ************************************ 00:16:57.428 20:44:14 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:57.428 20:44:14 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:57.428 20:44:14 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73457 00:16:57.428 20:44:14 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:57.428 20:44:14 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:57.428 20:44:14 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73457 00:16:57.428 20:44:14 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73457 ']' 00:16:57.428 20:44:14 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.428 20:44:14 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
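[annotation] test_save_ublk_config, starting here, exercises a round trip: bring up a target, create a ublk disk, capture the running configuration with save_config, then restart a second target from that capture. A rough sketch of the flow, pieced together from the trace and the config dump that follows; the exact helper logic lives in test/ublk/ublk.sh, and the bdev_malloc_create arguments (32 MiB at 4096-byte blocks) are inferred from the num_blocks/block_size in the dump rather than quoted from the script.

# sketch only, under the assumptions above
build/bin/spdk_tgt -L ublk &
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 8192 blocks x 4096 B
scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # exposes /dev/ublkb0
scripts/rpc.py save_config > saved.json                # the JSON dumped below
# the first target is then killed and a second one is started from the capture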
00:16:57.428 20:44:14 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.428 20:44:14 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.428 20:44:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:57.428 [2024-12-06 20:44:14.267964] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:16:57.428 [2024-12-06 20:44:14.268083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73457 ] 00:16:57.428 [2024-12-06 20:44:14.427706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.428 [2024-12-06 20:44:14.529782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.368 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.368 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:58.368 20:44:15 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:58.368 20:44:15 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:58.368 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.368 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:58.368 [2024-12-06 20:44:15.241916] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:58.368 [2024-12-06 20:44:15.242811] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:58.368 malloc0 00:16:58.368 [2024-12-06 20:44:15.313053] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:58.368 [2024-12-06 20:44:15.313151] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:58.368 [2024-12-06 20:44:15.313161] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:58.368 [2024-12-06 20:44:15.313173] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:58.368 [2024-12-06 20:44:15.320954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:58.368 [2024-12-06 20:44:15.320986] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:58.368 [2024-12-06 20:44:15.328933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:58.368 [2024-12-06 20:44:15.329056] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:58.368 [2024-12-06 20:44:15.345920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:58.368 0 00:16:58.368 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.368 20:44:15 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:58.368 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.368 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:58.643 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.643 20:44:15 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:58.643 
"subsystems": [ 00:16:58.643 { 00:16:58.643 "subsystem": "fsdev", 00:16:58.643 "config": [ 00:16:58.643 { 00:16:58.643 "method": "fsdev_set_opts", 00:16:58.643 "params": { 00:16:58.643 "fsdev_io_pool_size": 65535, 00:16:58.643 "fsdev_io_cache_size": 256 00:16:58.643 } 00:16:58.643 } 00:16:58.643 ] 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "keyring", 00:16:58.643 "config": [] 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "iobuf", 00:16:58.643 "config": [ 00:16:58.643 { 00:16:58.643 "method": "iobuf_set_options", 00:16:58.643 "params": { 00:16:58.643 "small_pool_count": 8192, 00:16:58.643 "large_pool_count": 1024, 00:16:58.643 "small_bufsize": 8192, 00:16:58.643 "large_bufsize": 135168, 00:16:58.643 "enable_numa": false 00:16:58.643 } 00:16:58.643 } 00:16:58.643 ] 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "sock", 00:16:58.643 "config": [ 00:16:58.643 { 00:16:58.643 "method": "sock_set_default_impl", 00:16:58.643 "params": { 00:16:58.643 "impl_name": "posix" 00:16:58.643 } 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "method": "sock_impl_set_options", 00:16:58.643 "params": { 00:16:58.643 "impl_name": "ssl", 00:16:58.643 "recv_buf_size": 4096, 00:16:58.643 "send_buf_size": 4096, 00:16:58.643 "enable_recv_pipe": true, 00:16:58.643 "enable_quickack": false, 00:16:58.643 "enable_placement_id": 0, 00:16:58.643 "enable_zerocopy_send_server": true, 00:16:58.643 "enable_zerocopy_send_client": false, 00:16:58.643 "zerocopy_threshold": 0, 00:16:58.643 "tls_version": 0, 00:16:58.643 "enable_ktls": false 00:16:58.643 } 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "method": "sock_impl_set_options", 00:16:58.643 "params": { 00:16:58.643 "impl_name": "posix", 00:16:58.643 "recv_buf_size": 2097152, 00:16:58.643 "send_buf_size": 2097152, 00:16:58.643 "enable_recv_pipe": true, 00:16:58.643 "enable_quickack": false, 00:16:58.643 "enable_placement_id": 0, 00:16:58.643 "enable_zerocopy_send_server": true, 00:16:58.643 "enable_zerocopy_send_client": false, 00:16:58.643 "zerocopy_threshold": 0, 00:16:58.643 "tls_version": 0, 00:16:58.643 "enable_ktls": false 00:16:58.643 } 00:16:58.643 } 00:16:58.643 ] 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "vmd", 00:16:58.643 "config": [] 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "accel", 00:16:58.643 "config": [ 00:16:58.643 { 00:16:58.643 "method": "accel_set_options", 00:16:58.643 "params": { 00:16:58.643 "small_cache_size": 128, 00:16:58.643 "large_cache_size": 16, 00:16:58.643 "task_count": 2048, 00:16:58.643 "sequence_count": 2048, 00:16:58.643 "buf_count": 2048 00:16:58.643 } 00:16:58.643 } 00:16:58.643 ] 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "bdev", 00:16:58.643 "config": [ 00:16:58.643 { 00:16:58.643 "method": "bdev_set_options", 00:16:58.643 "params": { 00:16:58.643 "bdev_io_pool_size": 65535, 00:16:58.643 "bdev_io_cache_size": 256, 00:16:58.643 "bdev_auto_examine": true, 00:16:58.643 "iobuf_small_cache_size": 128, 00:16:58.643 "iobuf_large_cache_size": 16 00:16:58.643 } 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "method": "bdev_raid_set_options", 00:16:58.643 "params": { 00:16:58.643 "process_window_size_kb": 1024, 00:16:58.643 "process_max_bandwidth_mb_sec": 0 00:16:58.643 } 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "method": "bdev_iscsi_set_options", 00:16:58.643 "params": { 00:16:58.643 "timeout_sec": 30 00:16:58.643 } 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "method": "bdev_nvme_set_options", 00:16:58.643 "params": { 00:16:58.643 "action_on_timeout": "none", 
00:16:58.643 "timeout_us": 0, 00:16:58.643 "timeout_admin_us": 0, 00:16:58.643 "keep_alive_timeout_ms": 10000, 00:16:58.643 "arbitration_burst": 0, 00:16:58.643 "low_priority_weight": 0, 00:16:58.643 "medium_priority_weight": 0, 00:16:58.643 "high_priority_weight": 0, 00:16:58.643 "nvme_adminq_poll_period_us": 10000, 00:16:58.643 "nvme_ioq_poll_period_us": 0, 00:16:58.643 "io_queue_requests": 0, 00:16:58.643 "delay_cmd_submit": true, 00:16:58.643 "transport_retry_count": 4, 00:16:58.643 "bdev_retry_count": 3, 00:16:58.643 "transport_ack_timeout": 0, 00:16:58.643 "ctrlr_loss_timeout_sec": 0, 00:16:58.643 "reconnect_delay_sec": 0, 00:16:58.643 "fast_io_fail_timeout_sec": 0, 00:16:58.643 "disable_auto_failback": false, 00:16:58.643 "generate_uuids": false, 00:16:58.643 "transport_tos": 0, 00:16:58.643 "nvme_error_stat": false, 00:16:58.643 "rdma_srq_size": 0, 00:16:58.643 "io_path_stat": false, 00:16:58.643 "allow_accel_sequence": false, 00:16:58.643 "rdma_max_cq_size": 0, 00:16:58.643 "rdma_cm_event_timeout_ms": 0, 00:16:58.643 "dhchap_digests": [ 00:16:58.643 "sha256", 00:16:58.643 "sha384", 00:16:58.643 "sha512" 00:16:58.643 ], 00:16:58.643 "dhchap_dhgroups": [ 00:16:58.643 "null", 00:16:58.643 "ffdhe2048", 00:16:58.643 "ffdhe3072", 00:16:58.643 "ffdhe4096", 00:16:58.643 "ffdhe6144", 00:16:58.643 "ffdhe8192" 00:16:58.643 ] 00:16:58.643 } 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "method": "bdev_nvme_set_hotplug", 00:16:58.643 "params": { 00:16:58.643 "period_us": 100000, 00:16:58.643 "enable": false 00:16:58.643 } 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "method": "bdev_malloc_create", 00:16:58.643 "params": { 00:16:58.643 "name": "malloc0", 00:16:58.643 "num_blocks": 8192, 00:16:58.643 "block_size": 4096, 00:16:58.643 "physical_block_size": 4096, 00:16:58.643 "uuid": "8b282b05-1994-4144-ae8d-6f8f55ba504f", 00:16:58.643 "optimal_io_boundary": 0, 00:16:58.643 "md_size": 0, 00:16:58.643 "dif_type": 0, 00:16:58.643 "dif_is_head_of_md": false, 00:16:58.643 "dif_pi_format": 0 00:16:58.643 } 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "method": "bdev_wait_for_examine" 00:16:58.643 } 00:16:58.643 ] 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "scsi", 00:16:58.643 "config": null 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "scheduler", 00:16:58.643 "config": [ 00:16:58.643 { 00:16:58.643 "method": "framework_set_scheduler", 00:16:58.643 "params": { 00:16:58.643 "name": "static" 00:16:58.643 } 00:16:58.643 } 00:16:58.643 ] 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "vhost_scsi", 00:16:58.643 "config": [] 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "vhost_blk", 00:16:58.643 "config": [] 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "ublk", 00:16:58.643 "config": [ 00:16:58.643 { 00:16:58.643 "method": "ublk_create_target", 00:16:58.643 "params": { 00:16:58.643 "cpumask": "1" 00:16:58.643 } 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "method": "ublk_start_disk", 00:16:58.643 "params": { 00:16:58.643 "bdev_name": "malloc0", 00:16:58.643 "ublk_id": 0, 00:16:58.643 "num_queues": 1, 00:16:58.643 "queue_depth": 128 00:16:58.643 } 00:16:58.643 } 00:16:58.643 ] 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "nbd", 00:16:58.643 "config": [] 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "subsystem": "nvmf", 00:16:58.643 "config": [ 00:16:58.643 { 00:16:58.643 "method": "nvmf_set_config", 00:16:58.644 "params": { 00:16:58.644 "discovery_filter": "match_any", 00:16:58.644 "admin_cmd_passthru": { 00:16:58.644 "identify_ctrlr": false 
00:16:58.644 }, 00:16:58.644 "dhchap_digests": [ 00:16:58.644 "sha256", 00:16:58.644 "sha384", 00:16:58.644 "sha512" 00:16:58.644 ], 00:16:58.644 "dhchap_dhgroups": [ 00:16:58.644 "null", 00:16:58.644 "ffdhe2048", 00:16:58.644 "ffdhe3072", 00:16:58.644 "ffdhe4096", 00:16:58.644 "ffdhe6144", 00:16:58.644 "ffdhe8192" 00:16:58.644 ] 00:16:58.644 } 00:16:58.644 }, 00:16:58.644 { 00:16:58.644 "method": "nvmf_set_max_subsystems", 00:16:58.644 "params": { 00:16:58.644 "max_subsystems": 1024 00:16:58.644 } 00:16:58.644 }, 00:16:58.644 { 00:16:58.644 "method": "nvmf_set_crdt", 00:16:58.644 "params": { 00:16:58.644 "crdt1": 0, 00:16:58.644 "crdt2": 0, 00:16:58.644 "crdt3": 0 00:16:58.644 } 00:16:58.644 } 00:16:58.644 ] 00:16:58.644 }, 00:16:58.644 { 00:16:58.644 "subsystem": "iscsi", 00:16:58.644 "config": [ 00:16:58.644 { 00:16:58.644 "method": "iscsi_set_options", 00:16:58.644 "params": { 00:16:58.644 "node_base": "iqn.2016-06.io.spdk", 00:16:58.644 "max_sessions": 128, 00:16:58.644 "max_connections_per_session": 2, 00:16:58.644 "max_queue_depth": 64, 00:16:58.644 "default_time2wait": 2, 00:16:58.644 "default_time2retain": 20, 00:16:58.644 "first_burst_length": 8192, 00:16:58.644 "immediate_data": true, 00:16:58.644 "allow_duplicated_isid": false, 00:16:58.644 "error_recovery_level": 0, 00:16:58.644 "nop_timeout": 60, 00:16:58.644 "nop_in_interval": 30, 00:16:58.644 "disable_chap": false, 00:16:58.644 "require_chap": false, 00:16:58.644 "mutual_chap": false, 00:16:58.644 "chap_group": 0, 00:16:58.644 "max_large_datain_per_connection": 64, 00:16:58.644 "max_r2t_per_connection": 4, 00:16:58.644 "pdu_pool_size": 36864, 00:16:58.644 "immediate_data_pool_size": 16384, 00:16:58.644 "data_out_pool_size": 2048 00:16:58.644 } 00:16:58.644 } 00:16:58.644 ] 00:16:58.644 } 00:16:58.644 ] 00:16:58.644 }' 00:16:58.644 20:44:15 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73457 00:16:58.644 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73457 ']' 00:16:58.644 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73457 00:16:58.644 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:58.644 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.644 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73457 00:16:58.644 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.644 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.644 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73457' 00:16:58.644 killing process with pid 73457 00:16:58.644 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73457 00:16:58.644 20:44:15 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73457 00:17:00.031 [2024-12-06 20:44:16.762396] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:00.031 [2024-12-06 20:44:16.798938] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:00.031 [2024-12-06 20:44:16.799097] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:00.031 [2024-12-06 20:44:16.808951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:00.031 [2024-12-06 
20:44:16.809015] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:00.031 [2024-12-06 20:44:16.809030] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:00.031 [2024-12-06 20:44:16.809060] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:00.031 [2024-12-06 20:44:16.809217] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:00.978 20:44:18 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73512 00:17:00.978 20:44:18 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73512 00:17:00.978 20:44:18 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73512 ']' 00:17:00.978 20:44:18 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.978 20:44:18 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.978 20:44:18 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.979 20:44:18 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.979 20:44:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:00.979 20:44:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:00.979 20:44:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:00.979 "subsystems": [ 00:17:00.979 { 00:17:00.979 "subsystem": "fsdev", 00:17:00.979 "config": [ 00:17:00.979 { 00:17:00.979 "method": "fsdev_set_opts", 00:17:00.979 "params": { 00:17:00.979 "fsdev_io_pool_size": 65535, 00:17:00.979 "fsdev_io_cache_size": 256 00:17:00.979 } 00:17:00.979 } 00:17:00.979 ] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "keyring", 00:17:00.979 "config": [] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "iobuf", 00:17:00.979 "config": [ 00:17:00.979 { 00:17:00.979 "method": "iobuf_set_options", 00:17:00.979 "params": { 00:17:00.979 "small_pool_count": 8192, 00:17:00.979 "large_pool_count": 1024, 00:17:00.979 "small_bufsize": 8192, 00:17:00.979 "large_bufsize": 135168, 00:17:00.979 "enable_numa": false 00:17:00.979 } 00:17:00.979 } 00:17:00.979 ] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "sock", 00:17:00.979 "config": [ 00:17:00.979 { 00:17:00.979 "method": "sock_set_default_impl", 00:17:00.979 "params": { 00:17:00.979 "impl_name": "posix" 00:17:00.979 } 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "method": "sock_impl_set_options", 00:17:00.979 "params": { 00:17:00.979 "impl_name": "ssl", 00:17:00.979 "recv_buf_size": 4096, 00:17:00.979 "send_buf_size": 4096, 00:17:00.979 "enable_recv_pipe": true, 00:17:00.979 "enable_quickack": false, 00:17:00.979 "enable_placement_id": 0, 00:17:00.979 "enable_zerocopy_send_server": true, 00:17:00.979 "enable_zerocopy_send_client": false, 00:17:00.979 "zerocopy_threshold": 0, 00:17:00.979 "tls_version": 0, 00:17:00.979 "enable_ktls": false 00:17:00.979 } 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "method": "sock_impl_set_options", 00:17:00.979 "params": { 00:17:00.979 "impl_name": "posix", 00:17:00.979 "recv_buf_size": 2097152, 00:17:00.979 "send_buf_size": 2097152, 00:17:00.979 "enable_recv_pipe": true, 00:17:00.979 "enable_quickack": false, 00:17:00.979 "enable_placement_id": 0, 00:17:00.979 "enable_zerocopy_send_server": true, 
00:17:00.979 "enable_zerocopy_send_client": false, 00:17:00.979 "zerocopy_threshold": 0, 00:17:00.979 "tls_version": 0, 00:17:00.979 "enable_ktls": false 00:17:00.979 } 00:17:00.979 } 00:17:00.979 ] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "vmd", 00:17:00.979 "config": [] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "accel", 00:17:00.979 "config": [ 00:17:00.979 { 00:17:00.979 "method": "accel_set_options", 00:17:00.979 "params": { 00:17:00.979 "small_cache_size": 128, 00:17:00.979 "large_cache_size": 16, 00:17:00.979 "task_count": 2048, 00:17:00.979 "sequence_count": 2048, 00:17:00.979 "buf_count": 2048 00:17:00.979 } 00:17:00.979 } 00:17:00.979 ] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "bdev", 00:17:00.979 "config": [ 00:17:00.979 { 00:17:00.979 "method": "bdev_set_options", 00:17:00.979 "params": { 00:17:00.979 "bdev_io_pool_size": 65535, 00:17:00.979 "bdev_io_cache_size": 256, 00:17:00.979 "bdev_auto_examine": true, 00:17:00.979 "iobuf_small_cache_size": 128, 00:17:00.979 "iobuf_large_cache_size": 16 00:17:00.979 } 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "method": "bdev_raid_set_options", 00:17:00.979 "params": { 00:17:00.979 "process_window_size_kb": 1024, 00:17:00.979 "process_max_bandwidth_mb_sec": 0 00:17:00.979 } 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "method": "bdev_iscsi_set_options", 00:17:00.979 "params": { 00:17:00.979 "timeout_sec": 30 00:17:00.979 } 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "method": "bdev_nvme_set_options", 00:17:00.979 "params": { 00:17:00.979 "action_on_timeout": "none", 00:17:00.979 "timeout_us": 0, 00:17:00.979 "timeout_admin_us": 0, 00:17:00.979 "keep_alive_timeout_ms": 10000, 00:17:00.979 "arbitration_burst": 0, 00:17:00.979 "low_priority_weight": 0, 00:17:00.979 "medium_priority_weight": 0, 00:17:00.979 "high_priority_weight": 0, 00:17:00.979 "nvme_adminq_poll_period_us": 10000, 00:17:00.979 "nvme_ioq_poll_period_us": 0, 00:17:00.979 "io_queue_requests": 0, 00:17:00.979 "delay_cmd_submit": true, 00:17:00.979 "transport_retry_count": 4, 00:17:00.979 "bdev_retry_count": 3, 00:17:00.979 "transport_ack_timeout": 0, 00:17:00.979 "ctrlr_loss_timeout_sec": 0, 00:17:00.979 "reconnect_delay_sec": 0, 00:17:00.979 "fast_io_fail_timeout_sec": 0, 00:17:00.979 "disable_auto_failback": false, 00:17:00.979 "generate_uuids": false, 00:17:00.979 "transport_tos": 0, 00:17:00.979 "nvme_error_stat": false, 00:17:00.979 "rdma_srq_size": 0, 00:17:00.979 "io_path_stat": false, 00:17:00.979 "allow_accel_sequence": false, 00:17:00.979 "rdma_max_cq_size": 0, 00:17:00.979 "rdma_cm_event_timeout_ms": 0, 00:17:00.979 "dhchap_digests": [ 00:17:00.979 "sha256", 00:17:00.979 "sha384", 00:17:00.979 "sha512" 00:17:00.979 ], 00:17:00.979 "dhchap_dhgroups": [ 00:17:00.979 "null", 00:17:00.979 "ffdhe2048", 00:17:00.979 "ffdhe3072", 00:17:00.979 "ffdhe4096", 00:17:00.979 "ffdhe6144", 00:17:00.979 "ffdhe8192" 00:17:00.979 ] 00:17:00.979 } 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "method": "bdev_nvme_set_hotplug", 00:17:00.979 "params": { 00:17:00.979 "period_us": 100000, 00:17:00.979 "enable": false 00:17:00.979 } 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "method": "bdev_malloc_create", 00:17:00.979 "params": { 00:17:00.979 "name": "malloc0", 00:17:00.979 "num_blocks": 8192, 00:17:00.979 "block_size": 4096, 00:17:00.979 "physical_block_size": 4096, 00:17:00.979 "uuid": "8b282b05-1994-4144-ae8d-6f8f55ba504f", 00:17:00.979 "optimal_io_boundary": 0, 00:17:00.979 "md_size": 0, 00:17:00.979 "dif_type": 0, 00:17:00.979 
"dif_is_head_of_md": false, 00:17:00.979 "dif_pi_format": 0 00:17:00.979 } 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "method": "bdev_wait_for_examine" 00:17:00.979 } 00:17:00.979 ] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "scsi", 00:17:00.979 "config": null 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "scheduler", 00:17:00.979 "config": [ 00:17:00.979 { 00:17:00.979 "method": "framework_set_scheduler", 00:17:00.979 "params": { 00:17:00.979 "name": "static" 00:17:00.979 } 00:17:00.979 } 00:17:00.979 ] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "vhost_scsi", 00:17:00.979 "config": [] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "vhost_blk", 00:17:00.979 "config": [] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "ublk", 00:17:00.979 "config": [ 00:17:00.979 { 00:17:00.979 "method": "ublk_create_target", 00:17:00.979 "params": { 00:17:00.979 "cpumask": "1" 00:17:00.979 } 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "method": "ublk_start_disk", 00:17:00.979 "params": { 00:17:00.979 "bdev_name": "malloc0", 00:17:00.979 "ublk_id": 0, 00:17:00.979 "num_queues": 1, 00:17:00.979 "queue_depth": 128 00:17:00.979 } 00:17:00.979 } 00:17:00.979 ] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "nbd", 00:17:00.979 "config": [] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "nvmf", 00:17:00.979 "config": [ 00:17:00.979 { 00:17:00.979 "method": "nvmf_set_config", 00:17:00.979 "params": { 00:17:00.979 "discovery_filter": "match_any", 00:17:00.979 "admin_cmd_passthru": { 00:17:00.979 "identify_ctrlr": false 00:17:00.979 }, 00:17:00.979 "dhchap_digests": [ 00:17:00.979 "sha256", 00:17:00.979 "sha384", 00:17:00.979 "sha512" 00:17:00.979 ], 00:17:00.979 "dhchap_dhgroups": [ 00:17:00.979 "null", 00:17:00.979 "ffdhe2048", 00:17:00.979 "ffdhe3072", 00:17:00.979 "ffdhe4096", 00:17:00.979 "ffdhe6144", 00:17:00.979 "ffdhe8192" 00:17:00.979 ] 00:17:00.979 } 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "method": "nvmf_set_max_subsystems", 00:17:00.979 "params": { 00:17:00.979 "max_subsystems": 1024 00:17:00.979 } 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "method": "nvmf_set_crdt", 00:17:00.979 "params": { 00:17:00.979 "crdt1": 0, 00:17:00.979 "crdt2": 0, 00:17:00.979 "crdt3": 0 00:17:00.979 } 00:17:00.979 } 00:17:00.979 ] 00:17:00.979 }, 00:17:00.979 { 00:17:00.979 "subsystem": "iscsi", 00:17:00.979 "config": [ 00:17:00.979 { 00:17:00.979 "method": "iscsi_set_options", 00:17:00.979 "params": { 00:17:00.979 "node_base": "iqn.2016-06.io.spdk", 00:17:00.980 "max_sessions": 128, 00:17:00.980 "max_connections_per_session": 2, 00:17:00.980 "max_queue_depth": 64, 00:17:00.980 "default_time2wait": 2, 00:17:00.980 "default_time2retain": 20, 00:17:00.980 "first_burst_length": 8192, 00:17:00.980 "immediate_data": true, 00:17:00.980 "allow_duplicated_isid": false, 00:17:00.980 "error_recovery_level": 0, 00:17:00.980 "nop_timeout": 60, 00:17:00.980 "nop_in_interval": 30, 00:17:00.980 "disable_chap": false, 00:17:00.980 "require_chap": false, 00:17:00.980 "mutual_chap": false, 00:17:00.980 "chap_group": 0, 00:17:00.980 "max_large_datain_per_connection": 64, 00:17:00.980 "max_r2t_per_connection": 4, 00:17:00.980 "pdu_pool_size": 36864, 00:17:00.980 "immediate_data_pool_size": 16384, 00:17:00.980 "data_out_pool_size": 2048 00:17:00.980 } 00:17:00.980 } 00:17:00.980 ] 00:17:00.980 } 00:17:00.980 ] 00:17:00.980 }' 00:17:01.242 [2024-12-06 20:44:18.173029] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:17:01.242 [2024-12-06 20:44:18.173148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73512 ] 00:17:01.242 [2024-12-06 20:44:18.328546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.503 [2024-12-06 20:44:18.407331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.075 [2024-12-06 20:44:19.059904] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:02.075 [2024-12-06 20:44:19.060560] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:02.075 [2024-12-06 20:44:19.067988] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:02.075 [2024-12-06 20:44:19.068046] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:02.075 [2024-12-06 20:44:19.068053] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:02.075 [2024-12-06 20:44:19.068059] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:02.075 [2024-12-06 20:44:19.076959] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:02.075 [2024-12-06 20:44:19.076976] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:02.075 [2024-12-06 20:44:19.083908] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:02.075 [2024-12-06 20:44:19.083975] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:02.075 [2024-12-06 20:44:19.100901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73512 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73512 ']' 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73512 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73512 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.075 killing process with pid 73512 00:17:02.075 
20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73512' 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73512 00:17:02.075 20:44:19 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73512 00:17:03.524 [2024-12-06 20:44:20.188983] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:03.524 [2024-12-06 20:44:20.227955] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:03.524 [2024-12-06 20:44:20.228065] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:03.524 [2024-12-06 20:44:20.236919] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:03.524 [2024-12-06 20:44:20.236958] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:03.524 [2024-12-06 20:44:20.236963] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:03.524 [2024-12-06 20:44:20.236981] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:03.524 [2024-12-06 20:44:20.237088] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:04.480 20:44:21 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:04.480 00:17:04.480 real 0m7.240s 00:17:04.480 user 0m4.953s 00:17:04.480 sys 0m2.844s 00:17:04.480 20:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.480 20:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:04.480 ************************************ 00:17:04.480 END TEST test_save_ublk_config 00:17:04.480 ************************************ 00:17:04.480 20:44:21 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73582 00:17:04.480 20:44:21 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:04.480 20:44:21 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73582 00:17:04.480 20:44:21 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:04.480 20:44:21 ublk -- common/autotest_common.sh@835 -- # '[' -z 73582 ']' 00:17:04.480 20:44:21 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.480 20:44:21 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.480 20:44:21 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.480 20:44:21 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.480 20:44:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.480 [2024-12-06 20:44:21.557600] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
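[annotation] The pid-73582 target launched here hosts the remaining per-test ublk cases: two cores (-m 0x3) with ublk debug logging, and waitforlisten blocking until the RPC server answers before any test runs. A sketch of that startup handshake, assuming the default /var/tmp/spdk.sock socket named in the wait message; the rpc.py probe is only a stand-in for waitforlisten's actual poll loop.

# main target for test_create_ublk and friends; cleanup() kills it on exit
build/bin/spdk_tgt -m 0x3 -L ublk &
# crude stand-in for waitforlisten: fail if RPC is not up within 30 s
scripts/rpc.py -t 30 rpc_get_methods >/dev/null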
00:17:04.480 [2024-12-06 20:44:21.557721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73582 ] 00:17:04.740 [2024-12-06 20:44:21.714784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:04.740 [2024-12-06 20:44:21.838126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.740 [2024-12-06 20:44:21.838227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.689 20:44:22 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.689 20:44:22 ublk -- common/autotest_common.sh@868 -- # return 0 00:17:05.689 20:44:22 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:05.689 20:44:22 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:05.689 20:44:22 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.689 20:44:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.689 ************************************ 00:17:05.689 START TEST test_create_ublk 00:17:05.689 ************************************ 00:17:05.689 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:17:05.689 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:05.689 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.689 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.689 [2024-12-06 20:44:22.578916] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:05.689 [2024-12-06 20:44:22.581254] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:05.689 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.689 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:05.689 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:05.689 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.689 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.951 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.951 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:05.951 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:05.951 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.951 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.951 [2024-12-06 20:44:22.865170] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:05.951 [2024-12-06 20:44:22.865709] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:05.951 [2024-12-06 20:44:22.865738] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:05.951 [2024-12-06 20:44:22.865750] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.951 [2024-12-06 20:44:22.873375] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.951 [2024-12-06 20:44:22.873412] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.951 
[2024-12-06 20:44:22.880953] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.951 [2024-12-06 20:44:22.881673] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:05.951 [2024-12-06 20:44:22.900933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.951 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.951 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:05.951 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:05.951 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:05.951 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.951 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.951 20:44:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.952 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:05.952 { 00:17:05.952 "ublk_device": "/dev/ublkb0", 00:17:05.952 "id": 0, 00:17:05.952 "queue_depth": 512, 00:17:05.952 "num_queues": 4, 00:17:05.952 "bdev_name": "Malloc0" 00:17:05.952 } 00:17:05.952 ]' 00:17:05.952 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:05.952 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:05.952 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:05.952 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:05.952 20:44:22 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:05.952 20:44:23 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:05.952 20:44:23 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:05.952 20:44:23 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:05.952 20:44:23 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:05.952 20:44:23 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:05.952 20:44:23 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:05.952 20:44:23 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:05.952 20:44:23 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:05.952 20:44:23 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:05.952 20:44:23 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:05.952 20:44:23 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:05.952 20:44:23 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:05.952 20:44:23 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:05.952 20:44:23 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:05.952 20:44:23 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:05.952 20:44:23 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
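[annotation] run_fio_test assembles its command line from the lvol/common.sh pieces traced above; the result, executed immediately below, writes the 0xcc pattern over the full 128 MiB device. Because --time_based --runtime=10 spends the entire run in the write phase, the --do_verify read-back never gets scheduled, which is exactly the warning fio prints at startup below.

# the assembled command, verbatim from the trace that follows
fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0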
00:17:05.952 20:44:23 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:17:06.214 fio: verification read phase will never start because write phase uses all of runtime 00:17:06.214 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:17:06.214 fio-3.35 00:17:06.214 Starting 1 process 00:17:16.225 00:17:16.225 fio_test: (groupid=0, jobs=1): err= 0: pid=73632: Fri Dec 6 20:44:33 2024 00:17:16.225 write: IOPS=19.7k, BW=76.8MiB/s (80.6MB/s)(768MiB/10001msec); 0 zone resets 00:17:16.225 clat (usec): min=33, max=3967, avg=50.04, stdev=82.57 00:17:16.225 lat (usec): min=34, max=3967, avg=50.50, stdev=82.59 00:17:16.225 clat percentiles (usec): 00:17:16.225 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 43], 00:17:16.225 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:17:16.225 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 57], 95.00th=[ 63], 00:17:16.225 | 99.00th=[ 77], 99.50th=[ 88], 99.90th=[ 1270], 99.95th=[ 2474], 00:17:16.225 | 99.99th=[ 3490] 00:17:16.225 bw ( KiB/s): min=59528, max=83528, per=99.82%, avg=78524.37, stdev=6003.87, samples=19 00:17:16.225 iops : min=14882, max=20882, avg=19631.05, stdev=1500.94, samples=19 00:17:16.225 lat (usec) : 50=81.54%, 100=18.07%, 250=0.22%, 500=0.03%, 750=0.01% 00:17:16.225 lat (usec) : 1000=0.01% 00:17:16.225 lat (msec) : 2=0.05%, 4=0.07% 00:17:16.225 cpu : usr=3.63%, sys=16.72%, ctx=196663, majf=0, minf=796 00:17:16.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.225 issued rwts: total=0,196677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.225 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.225 00:17:16.225 Run status group 0 (all jobs): 00:17:16.225 WRITE: bw=76.8MiB/s (80.6MB/s), 76.8MiB/s-76.8MiB/s (80.6MB/s-80.6MB/s), io=768MiB (806MB), run=10001-10001msec 00:17:16.225 00:17:16.225 Disk stats (read/write): 00:17:16.225 ublkb0: ios=0/194557, merge=0/0, ticks=0/7969, in_queue=7970, util=99.08% 00:17:16.225 20:44:33 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:16.225 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.225 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.225 [2024-12-06 20:44:33.317083] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:16.484 [2024-12-06 20:44:33.356941] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:16.484 [2024-12-06 20:44:33.357525] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:16.484 [2024-12-06 20:44:33.365933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:16.484 [2024-12-06 20:44:33.366174] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:16.484 [2024-12-06 20:44:33.366188] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.484 20:44:33 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
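Everything up to this point exercises the full single-device lifecycle: ublk_create_target, bdev_malloc_create, ublk_start_disk, a timed fio write, then ublk_stop_disk. The NOT wrapper that follows re-issues the stop and is expected to fail, since device 0 has already been deleted. The lifecycle as a standalone sketch, with the rpc.py path as printed later in this log and the sizes and queue settings taken from the calls above:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC ublk_create_target
    $RPC bdev_malloc_create 128 4096              # prints the new bdev name, Malloc0 here
    $RPC ublk_start_disk Malloc0 0 -q 4 -d 512    # exposes /dev/ublkb0
    $RPC ublk_stop_disk 0
    $RPC ublk_destroy_target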
00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.484 [2024-12-06 20:44:33.380963] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:16.484 request: 00:17:16.484 { 00:17:16.484 "ublk_id": 0, 00:17:16.484 "method": "ublk_stop_disk", 00:17:16.484 "req_id": 1 00:17:16.484 } 00:17:16.484 Got JSON-RPC error response 00:17:16.484 response: 00:17:16.484 { 00:17:16.484 "code": -19, 00:17:16.484 "message": "No such device" 00:17:16.484 } 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.484 20:44:33 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.484 [2024-12-06 20:44:33.396977] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:16.484 [2024-12-06 20:44:33.400599] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:16.484 [2024-12-06 20:44:33.400629] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.484 20:44:33 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.484 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.743 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.743 20:44:33 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:16.743 20:44:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:16.743 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.743 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.743 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.743 20:44:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:16.743 20:44:33 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:16.743 20:44:33 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:16.743 20:44:33 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:16.743 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.743 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.743 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.743 20:44:33 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:16.743 20:44:33 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:16.743 ************************************ 00:17:16.743 END TEST test_create_ublk 00:17:16.743 ************************************ 00:17:16.743 20:44:33 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:16.743 00:17:16.743 real 0m11.284s 00:17:16.743 user 0m0.657s 00:17:16.743 sys 0m1.753s 00:17:16.743 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.743 20:44:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.091 20:44:33 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:17.091 20:44:33 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:17.091 20:44:33 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.091 20:44:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.091 ************************************ 00:17:17.091 START TEST test_create_multi_ublk 00:17:17.091 ************************************ 00:17:17.091 20:44:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:17.091 20:44:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:17.091 20:44:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.091 20:44:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.091 [2024-12-06 20:44:33.903906] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:17.091 [2024-12-06 20:44:33.905420] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:17.091 20:44:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.091 20:44:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:17.091 20:44:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:17.091 20:44:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:17.091 20:44:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:17.091 20:44:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.091 20:44:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.091 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.091 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:17.091 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:17.091 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.091 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.091 [2024-12-06 20:44:34.107001] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
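test_create_multi_ublk repeats the same ADD_DEV / SET_PARAMS / START_DEV handshake once per device, looping over seq 0 3. A sketch of the loop being traced above and below, assuming MAX_DEV_ID is 3 to match that seq call:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in $(seq 0 3); do
        $RPC bdev_malloc_create -b Malloc$i 128 4096
        $RPC ublk_start_disk Malloc$i $i -q 4 -d 512    # exposes /dev/ublkb$i
    done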
00:17:17.091 [2024-12-06 20:44:34.107292] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:17.091 [2024-12-06 20:44:34.107304] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:17.091 [2024-12-06 20:44:34.107312] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:17.091 [2024-12-06 20:44:34.130909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:17.091 [2024-12-06 20:44:34.130930] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:17.091 [2024-12-06 20:44:34.142909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:17.091 [2024-12-06 20:44:34.143409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:17.091 [2024-12-06 20:44:34.182911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:17.091 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.091 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:17.091 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:17.091 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:17.091 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.091 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.365 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:17.365 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:17.365 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.365 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 [2024-12-06 20:44:34.402998] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:17.365 [2024-12-06 20:44:34.403286] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:17.365 [2024-12-06 20:44:34.403299] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:17.365 [2024-12-06 20:44:34.403304] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:17.365 [2024-12-06 20:44:34.410921] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:17.365 [2024-12-06 20:44:34.410938] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:17.365 [2024-12-06 20:44:34.418909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:17.365 [2024-12-06 20:44:34.419388] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:17.365 [2024-12-06 20:44:34.427931] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:17.365 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.365 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:17.365 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:17.365 20:44:34 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:17.365 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.365 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.624 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.624 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:17.624 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:17.624 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.624 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.624 [2024-12-06 20:44:34.587001] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:17.624 [2024-12-06 20:44:34.587296] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:17.624 [2024-12-06 20:44:34.587308] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:17.624 [2024-12-06 20:44:34.587315] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:17.624 [2024-12-06 20:44:34.594922] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:17.624 [2024-12-06 20:44:34.594942] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:17.624 [2024-12-06 20:44:34.602914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:17.624 [2024-12-06 20:44:34.603410] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:17.624 [2024-12-06 20:44:34.611930] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:17.624 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.624 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:17.624 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:17.624 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:17.624 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.624 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.885 [2024-12-06 20:44:34.771007] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:17.885 [2024-12-06 20:44:34.771297] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:17.885 [2024-12-06 20:44:34.771307] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:17.885 [2024-12-06 20:44:34.771313] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:17.885 [2024-12-06 
20:44:34.778918] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:17.885 [2024-12-06 20:44:34.778933] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:17.885 [2024-12-06 20:44:34.786915] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:17.885 [2024-12-06 20:44:34.787405] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:17.885 [2024-12-06 20:44:34.795937] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:17.885 { 00:17:17.885 "ublk_device": "/dev/ublkb0", 00:17:17.885 "id": 0, 00:17:17.885 "queue_depth": 512, 00:17:17.885 "num_queues": 4, 00:17:17.885 "bdev_name": "Malloc0" 00:17:17.885 }, 00:17:17.885 { 00:17:17.885 "ublk_device": "/dev/ublkb1", 00:17:17.885 "id": 1, 00:17:17.885 "queue_depth": 512, 00:17:17.885 "num_queues": 4, 00:17:17.885 "bdev_name": "Malloc1" 00:17:17.885 }, 00:17:17.885 { 00:17:17.885 "ublk_device": "/dev/ublkb2", 00:17:17.885 "id": 2, 00:17:17.885 "queue_depth": 512, 00:17:17.885 "num_queues": 4, 00:17:17.885 "bdev_name": "Malloc2" 00:17:17.885 }, 00:17:17.885 { 00:17:17.885 "ublk_device": "/dev/ublkb3", 00:17:17.885 "id": 3, 00:17:17.885 "queue_depth": 512, 00:17:17.885 "num_queues": 4, 00:17:17.885 "bdev_name": "Malloc3" 00:17:17.885 } 00:17:17.885 ]' 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:17.885 20:44:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
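ublk_get_disks returns the four-entry JSON array shown above, and the harness validates each field with a per-device jq filter. The checks for device 1 that continue below reduce to, for example:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC ublk_get_disks | jq -r '.[1].ublk_device'    # expect /dev/ublkb1
    $RPC ublk_get_disks | jq -r '.[1].queue_depth'    # expect 512
    $RPC ublk_get_disks | jq -r '.[1].bdev_name'      # expect Malloc1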
00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:18.144 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.403 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:18.403 [2024-12-06 20:44:35.474979] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:18.404 [2024-12-06 20:44:35.520906] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:18.404 [2024-12-06 20:44:35.521614] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:18.404 [2024-12-06 20:44:35.531937] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:18.404 [2024-12-06 20:44:35.532157] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:18.404 [2024-12-06 20:44:35.532167] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:18.663 [2024-12-06 20:44:35.546977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:18.663 [2024-12-06 20:44:35.588935] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:18.663 [2024-12-06 20:44:35.589581] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:18.663 [2024-12-06 20:44:35.596912] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:18.663 [2024-12-06 20:44:35.597132] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:18.663 [2024-12-06 20:44:35.597146] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:18.663 [2024-12-06 20:44:35.612977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:18.663 [2024-12-06 20:44:35.655937] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:18.663 [2024-12-06 20:44:35.656545] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:18.663 [2024-12-06 20:44:35.663911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:18.663 [2024-12-06 20:44:35.664130] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:18.663 [2024-12-06 20:44:35.664142] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.663 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
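Teardown mirrors setup: each device is stopped in turn (STOP_DEV, then DEL_DEV, then the "ublk dev N stopped" notice), as traced for devices 0-2 above and device 3 below, after which the target is destroyed and the malloc bdevs are deleted. As a sketch:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in $(seq 0 3); do $RPC ublk_stop_disk $i; done
    $RPC -t 120 ublk_destroy_target
    for i in $(seq 0 3); do $RPC bdev_malloc_delete Malloc$i; done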
00:17:18.663 [2024-12-06 20:44:35.676966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:18.663 [2024-12-06 20:44:35.711941] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:18.663 [2024-12-06 20:44:35.712515] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:18.663 [2024-12-06 20:44:35.720947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:18.663 [2024-12-06 20:44:35.721162] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:18.664 [2024-12-06 20:44:35.721174] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:18.664 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.664 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:18.924 [2024-12-06 20:44:35.919949] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:18.924 [2024-12-06 20:44:35.923497] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:18.924 [2024-12-06 20:44:35.923522] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:18.924 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:18.924 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:18.924 20:44:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:18.924 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.924 20:44:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:19.184 20:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.184 20:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:19.184 20:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:19.184 20:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.184 20:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:19.755 20:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.755 20:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:19.755 20:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:19.755 20:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.755 20:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:19.755 20:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.755 20:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:19.755 20:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:19.755 20:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.755 20:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:20.015 ************************************ 00:17:20.015 END TEST test_create_multi_ublk 00:17:20.015 ************************************ 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:20.015 00:17:20.015 real 0m3.233s 00:17:20.015 user 0m0.842s 00:17:20.015 sys 0m0.130s 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.015 20:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:20.274 20:44:37 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:20.274 20:44:37 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:20.274 20:44:37 ublk -- ublk/ublk.sh@130 -- # killprocess 73582 00:17:20.274 20:44:37 ublk -- common/autotest_common.sh@954 -- # '[' -z 73582 ']' 00:17:20.274 20:44:37 ublk -- common/autotest_common.sh@958 -- # kill -0 73582 00:17:20.274 20:44:37 ublk -- common/autotest_common.sh@959 -- # uname 00:17:20.274 20:44:37 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.274 20:44:37 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73582 00:17:20.274 killing process with pid 73582 00:17:20.274 20:44:37 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.274 20:44:37 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.274 20:44:37 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73582' 00:17:20.274 20:44:37 ublk -- common/autotest_common.sh@973 -- # kill 73582 00:17:20.274 20:44:37 ublk -- common/autotest_common.sh@978 -- # wait 73582 00:17:20.847 [2024-12-06 20:44:37.730676] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:20.847 [2024-12-06 20:44:37.730724] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:21.414 00:17:21.414 real 0m24.361s 00:17:21.414 user 0m35.142s 00:17:21.414 sys 0m9.772s 00:17:21.414 20:44:38 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.414 20:44:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:21.414 ************************************ 00:17:21.414 END TEST ublk 00:17:21.414 ************************************ 00:17:21.414 20:44:38 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:21.414 20:44:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:17:21.414 20:44:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.414 20:44:38 -- common/autotest_common.sh@10 -- # set +x 00:17:21.414 ************************************ 00:17:21.414 START TEST ublk_recovery 00:17:21.414 ************************************ 00:17:21.414 20:44:38 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:21.414 * Looking for test storage... 00:17:21.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:21.414 20:44:38 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:21.414 20:44:38 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:17:21.414 20:44:38 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:21.673 20:44:38 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.673 20:44:38 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:21.673 20:44:38 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.673 20:44:38 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:21.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.673 --rc genhtml_branch_coverage=1 00:17:21.673 --rc genhtml_function_coverage=1 00:17:21.673 --rc genhtml_legend=1 00:17:21.673 --rc geninfo_all_blocks=1 00:17:21.673 --rc geninfo_unexecuted_blocks=1 00:17:21.673 00:17:21.673 ' 00:17:21.673 20:44:38 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:21.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.673 --rc genhtml_branch_coverage=1 00:17:21.674 --rc genhtml_function_coverage=1 00:17:21.674 --rc genhtml_legend=1 00:17:21.674 --rc geninfo_all_blocks=1 00:17:21.674 --rc geninfo_unexecuted_blocks=1 00:17:21.674 00:17:21.674 ' 00:17:21.674 20:44:38 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:21.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.674 --rc genhtml_branch_coverage=1 00:17:21.674 --rc genhtml_function_coverage=1 00:17:21.674 --rc genhtml_legend=1 00:17:21.674 --rc geninfo_all_blocks=1 00:17:21.674 --rc geninfo_unexecuted_blocks=1 00:17:21.674 00:17:21.674 ' 00:17:21.674 20:44:38 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:21.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.674 --rc genhtml_branch_coverage=1 00:17:21.674 --rc genhtml_function_coverage=1 00:17:21.674 --rc genhtml_legend=1 00:17:21.674 --rc geninfo_all_blocks=1 00:17:21.674 --rc geninfo_unexecuted_blocks=1 00:17:21.674 00:17:21.674 ' 00:17:21.674 20:44:38 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:21.674 20:44:38 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:21.674 20:44:38 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:21.674 20:44:38 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:21.674 20:44:38 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:21.674 20:44:38 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:21.674 20:44:38 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:21.674 20:44:38 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:21.674 20:44:38 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:21.674 20:44:38 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:21.674 20:44:38 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73979 00:17:21.674 20:44:38 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:21.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.674 20:44:38 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.674 20:44:38 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73979 00:17:21.674 20:44:38 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 73979 ']' 00:17:21.674 20:44:38 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.674 20:44:38 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.674 20:44:38 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.674 20:44:38 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.674 20:44:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.674 [2024-12-06 20:44:38.662920] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:17:21.674 [2024-12-06 20:44:38.663042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73979 ] 00:17:21.933 [2024-12-06 20:44:38.819352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:21.933 [2024-12-06 20:44:38.904214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.933 [2024-12-06 20:44:38.904325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.505 20:44:39 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.505 20:44:39 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:22.505 20:44:39 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:22.505 20:44:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.505 20:44:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.505 [2024-12-06 20:44:39.501906] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:22.505 [2024-12-06 20:44:39.503443] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:22.505 20:44:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.505 20:44:39 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:22.505 20:44:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.505 20:44:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.505 malloc0 00:17:22.505 20:44:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.505 20:44:39 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:22.505 20:44:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.505 20:44:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.505 [2024-12-06 20:44:39.589006] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:22.505 [2024-12-06 20:44:39.589086] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:22.505 [2024-12-06 20:44:39.589095] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:22.505 [2024-12-06 20:44:39.589100] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:22.505 [2024-12-06 20:44:39.596925] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:22.505 [2024-12-06 20:44:39.596941] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:22.505 [2024-12-06 20:44:39.604913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:22.505 [2024-12-06 20:44:39.605020] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:22.505 [2024-12-06 20:44:39.627916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:22.505 1 00:17:22.505 20:44:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.505 20:44:39 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:23.888 20:44:40 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74013 00:17:23.888 20:44:40 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:23.888 20:44:40 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:23.888 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:23.888 fio-3.35 00:17:23.888 Starting 1 process 00:17:29.170 20:44:45 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73979 00:17:29.170 20:44:45 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:34.460 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73979 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:34.460 20:44:50 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:34.460 20:44:50 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74125 00:17:34.460 20:44:50 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:34.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.460 20:44:50 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74125 00:17:34.460 20:44:50 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74125 ']' 00:17:34.460 20:44:50 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.460 20:44:50 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.460 20:44:50 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.460 20:44:50 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.460 20:44:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.460 [2024-12-06 20:44:50.740526] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
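This is the crash-recovery scenario proper: fio is started against /dev/ublkb1, the first spdk_tgt (pid 73979) is killed with SIGKILL mid-run, and a second target is launched while the kernel ublk device stays live. On the new target the harness calls ublk_recover_disk instead of ublk_start_disk, which drives the GET_DEV_INFO / START_USER_RECOVERY / END_USER_RECOVERY command sequence traced below. A sketch of the second target's side, with the binary path and arguments as printed above:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC ublk_create_target
    $RPC bdev_malloc_create -b malloc0 64 4096
    $RPC ublk_recover_disk malloc0 1    # reattach the existing /dev/ublkb1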
00:17:34.460 [2024-12-06 20:44:50.740676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74125 ] 00:17:34.460 [2024-12-06 20:44:50.904755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:34.460 [2024-12-06 20:44:51.039670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.460 [2024-12-06 20:44:51.039758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.721 20:44:51 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.721 20:44:51 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:34.721 20:44:51 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:34.721 20:44:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.721 20:44:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.721 [2024-12-06 20:44:51.747916] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:34.721 [2024-12-06 20:44:51.750265] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:34.721 20:44:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.721 20:44:51 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:34.721 20:44:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.721 20:44:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.982 malloc0 00:17:34.982 20:44:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.982 20:44:51 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:34.982 20:44:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.982 20:44:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.982 [2024-12-06 20:44:51.868186] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:34.982 [2024-12-06 20:44:51.868244] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:34.982 [2024-12-06 20:44:51.868255] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:34.982 [2024-12-06 20:44:51.875965] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:34.982 [2024-12-06 20:44:51.876003] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:34.982 [2024-12-06 20:44:51.876013] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:34.982 [2024-12-06 20:44:51.876109] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:34.982 1 00:17:34.982 20:44:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.982 20:44:51 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74013 00:17:34.982 [2024-12-06 20:44:51.883930] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:34.982 [2024-12-06 20:44:51.891701] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:34.982 [2024-12-06 20:44:51.899154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:34.982 [2024-12-06 
20:44:51.899194] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:31.289 00:18:31.289 fio_test: (groupid=0, jobs=1): err= 0: pid=74017: Fri Dec 6 20:45:40 2024 00:18:31.289 read: IOPS=27.3k, BW=107MiB/s (112MB/s)(6408MiB/60003msec) 00:18:31.289 slat (nsec): min=1096, max=206140, avg=4867.72, stdev=1525.01 00:18:31.289 clat (usec): min=662, max=6264.4k, avg=2337.62, stdev=42056.59 00:18:31.289 lat (usec): min=667, max=6264.4k, avg=2342.49, stdev=42056.59 00:18:31.289 clat percentiles (usec): 00:18:31.289 | 1.00th=[ 1663], 5.00th=[ 1778], 10.00th=[ 1811], 20.00th=[ 1844], 00:18:31.289 | 30.00th=[ 1860], 40.00th=[ 1876], 50.00th=[ 1893], 60.00th=[ 1926], 00:18:31.289 | 70.00th=[ 1942], 80.00th=[ 1975], 90.00th=[ 2376], 95.00th=[ 2999], 00:18:31.289 | 99.00th=[ 5014], 99.50th=[ 5866], 99.90th=[ 7373], 99.95th=[ 8717], 00:18:31.289 | 99.99th=[13042] 00:18:31.289 bw ( KiB/s): min=39616, max=130112, per=100.00%, avg=121591.03, stdev=14867.78, samples=107 00:18:31.289 iops : min= 9904, max=32528, avg=30397.76, stdev=3716.95, samples=107 00:18:31.289 write: IOPS=27.3k, BW=107MiB/s (112MB/s)(6403MiB/60003msec); 0 zone resets 00:18:31.289 slat (nsec): min=1087, max=323189, avg=4890.51, stdev=1568.08 00:18:31.289 clat (usec): min=625, max=6264.5k, avg=2335.25, stdev=35943.34 00:18:31.289 lat (usec): min=629, max=6264.5k, avg=2340.14, stdev=35943.34 00:18:31.289 clat percentiles (usec): 00:18:31.289 | 1.00th=[ 1696], 5.00th=[ 1860], 10.00th=[ 1893], 20.00th=[ 1926], 00:18:31.289 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 1991], 60.00th=[ 2008], 00:18:31.289 | 70.00th=[ 2024], 80.00th=[ 2073], 90.00th=[ 2442], 95.00th=[ 2933], 00:18:31.289 | 99.00th=[ 4948], 99.50th=[ 5932], 99.90th=[ 7177], 99.95th=[ 8586], 00:18:31.289 | 99.99th=[13042] 00:18:31.289 bw ( KiB/s): min=39048, max=130504, per=100.00%, avg=121484.71, stdev=15011.87, samples=107 00:18:31.289 iops : min= 9762, max=32626, avg=30371.18, stdev=3752.97, samples=107 00:18:31.289 lat (usec) : 750=0.01%, 1000=0.01% 00:18:31.289 lat (msec) : 2=69.62%, 4=27.75%, 10=2.60%, 20=0.03%, >=2000=0.01% 00:18:31.289 cpu : usr=6.20%, sys=27.68%, ctx=111054, majf=0, minf=13 00:18:31.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:31.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:31.289 issued rwts: total=1640524,1639108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:31.289 00:18:31.289 Run status group 0 (all jobs): 00:18:31.289 READ: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=6408MiB (6720MB), run=60003-60003msec 00:18:31.289 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=6403MiB (6714MB), run=60003-60003msec 00:18:31.289 00:18:31.289 Disk stats (read/write): 00:18:31.289 ublkb1: ios=1637054/1635689, merge=0/0, ticks=3738871/3601393, in_queue=7340265, util=99.89% 00:18:31.289 20:45:40 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.289 [2024-12-06 20:45:40.893229] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:31.289 [2024-12-06 20:45:40.925998] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:18:31.289 [2024-12-06 20:45:40.926133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:31.289 [2024-12-06 20:45:40.935915] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:31.289 [2024-12-06 20:45:40.935995] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:31.289 [2024-12-06 20:45:40.936004] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.289 20:45:40 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.289 [2024-12-06 20:45:40.951977] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:31.289 [2024-12-06 20:45:40.955648] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:31.289 [2024-12-06 20:45:40.955677] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.289 20:45:40 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:31.289 20:45:40 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:31.289 20:45:40 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74125 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74125 ']' 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74125 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74125 00:18:31.289 killing process with pid 74125 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74125' 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74125 00:18:31.289 20:45:40 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74125 00:18:31.289 [2024-12-06 20:45:42.021593] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:31.289 [2024-12-06 20:45:42.021636] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:31.289 ************************************ 00:18:31.289 END TEST ublk_recovery 00:18:31.289 ************************************ 00:18:31.289 00:18:31.289 real 1m4.296s 00:18:31.289 user 1m42.966s 00:18:31.289 sys 0m35.240s 00:18:31.289 20:45:42 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.289 20:45:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.289 20:45:42 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:31.289 20:45:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:31.289 20:45:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:31.289 20:45:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.289 20:45:42 -- common/autotest_common.sh@10 -- # set +x 00:18:31.289 20:45:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:31.289 20:45:42 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:31.289 20:45:42 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:18:31.289 20:45:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:31.289 20:45:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:31.289 20:45:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:31.289 20:45:42 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:31.289 20:45:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:31.289 20:45:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:31.289 20:45:42 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:31.289 20:45:42 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:31.289 20:45:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:31.289 20:45:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.289 20:45:42 -- common/autotest_common.sh@10 -- # set +x 00:18:31.289 ************************************ 00:18:31.289 START TEST ftl 00:18:31.289 ************************************ 00:18:31.289 20:45:42 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:31.289 * Looking for test storage... 00:18:31.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.289 20:45:42 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:31.289 20:45:42 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:18:31.289 20:45:42 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:31.289 20:45:42 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:31.289 20:45:42 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.289 20:45:42 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.289 20:45:42 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.289 20:45:42 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.289 20:45:42 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.289 20:45:42 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.289 20:45:42 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.289 20:45:42 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.289 20:45:42 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.289 20:45:42 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.289 20:45:42 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.289 20:45:42 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:31.289 20:45:42 ftl -- scripts/common.sh@345 -- # : 1 00:18:31.289 20:45:42 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.289 20:45:42 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.289 20:45:42 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:31.289 20:45:42 ftl -- scripts/common.sh@353 -- # local d=1 00:18:31.289 20:45:42 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.289 20:45:42 ftl -- scripts/common.sh@355 -- # echo 1 00:18:31.289 20:45:42 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.289 20:45:42 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:31.289 20:45:42 ftl -- scripts/common.sh@353 -- # local d=2 00:18:31.289 20:45:42 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.289 20:45:42 ftl -- scripts/common.sh@355 -- # echo 2 00:18:31.289 20:45:42 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.289 20:45:42 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.290 20:45:42 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.290 20:45:42 ftl -- scripts/common.sh@368 -- # return 0 00:18:31.290 20:45:42 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.290 20:45:42 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:31.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.290 --rc genhtml_branch_coverage=1 00:18:31.290 --rc genhtml_function_coverage=1 00:18:31.290 --rc genhtml_legend=1 00:18:31.290 --rc geninfo_all_blocks=1 00:18:31.290 --rc geninfo_unexecuted_blocks=1 00:18:31.290 00:18:31.290 ' 00:18:31.290 20:45:42 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:31.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.290 --rc genhtml_branch_coverage=1 00:18:31.290 --rc genhtml_function_coverage=1 00:18:31.290 --rc genhtml_legend=1 00:18:31.290 --rc geninfo_all_blocks=1 00:18:31.290 --rc geninfo_unexecuted_blocks=1 00:18:31.290 00:18:31.290 ' 00:18:31.290 20:45:42 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:31.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.290 --rc genhtml_branch_coverage=1 00:18:31.290 --rc genhtml_function_coverage=1 00:18:31.290 --rc genhtml_legend=1 00:18:31.290 --rc geninfo_all_blocks=1 00:18:31.290 --rc geninfo_unexecuted_blocks=1 00:18:31.290 00:18:31.290 ' 00:18:31.290 20:45:42 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:31.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.290 --rc genhtml_branch_coverage=1 00:18:31.290 --rc genhtml_function_coverage=1 00:18:31.290 --rc genhtml_legend=1 00:18:31.290 --rc geninfo_all_blocks=1 00:18:31.290 --rc geninfo_unexecuted_blocks=1 00:18:31.290 00:18:31.290 ' 00:18:31.290 20:45:42 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:31.290 20:45:42 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:31.290 20:45:42 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.290 20:45:42 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.290 20:45:42 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
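The `lt 1.15 2` walk above is scripts/common.sh picking lcov flags: cmp_versions splits both version strings on the separators `.-:`, treats missing or non-numeric fields as 0, and compares the fields numerically from left to right. A minimal stand-alone sketch of that comparison in bash — version_lt is an illustrative name, not the real helper, and the real cmp_versions also handles `>`, `==`, and alpha fields via its decimal() wrapper:

# Sketch: field-wise "less than" in the spirit of the cmp_versions trace above.
# Returns 0 (true) when $1 sorts strictly before $2.
version_lt() {
    local -a ver1 ver2
    local v a b
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        [[ $a =~ ^[0-9]+$ ]] || a=0       # non-numeric fields -> 0, like decimal()
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                              # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2: pick the pre-2.0 lcov flags"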
00:18:31.290 20:45:42 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:31.290 20:45:42 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.290 20:45:42 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:31.290 20:45:42 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:31.290 20:45:42 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.290 20:45:42 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.290 20:45:42 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:31.290 20:45:42 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:31.290 20:45:42 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:31.290 20:45:42 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:31.290 20:45:42 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:31.290 20:45:42 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:31.290 20:45:42 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.290 20:45:42 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.290 20:45:42 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:31.290 20:45:42 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:31.290 20:45:42 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:31.290 20:45:42 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:31.290 20:45:42 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:31.290 20:45:42 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:31.290 20:45:42 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:31.290 20:45:42 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:31.290 20:45:42 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:31.290 20:45:42 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:31.290 20:45:42 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.290 20:45:42 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:31.290 20:45:42 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:31.290 20:45:42 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:31.290 20:45:42 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:31.290 20:45:42 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:31.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:31.290 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:31.290 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:31.290 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:31.290 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:31.290 20:45:43 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74920 00:18:31.290 20:45:43 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:31.290 20:45:43 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74920 00:18:31.290 20:45:43 ftl -- common/autotest_common.sh@835 -- # '[' -z 74920 ']' 00:18:31.290 20:45:43 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.290 20:45:43 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.290 20:45:43 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.290 20:45:43 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.290 20:45:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:31.290 [2024-12-06 20:45:43.553243] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:18:31.290 [2024-12-06 20:45:43.553651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74920 ] 00:18:31.290 [2024-12-06 20:45:43.716734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.290 [2024-12-06 20:45:43.846671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.290 20:45:44 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.290 20:45:44 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:31.290 20:45:44 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:31.290 20:45:44 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:31.290 20:45:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:31.290 20:45:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:31.290 20:45:45 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:31.290 20:45:45 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:31.290 20:45:45 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@50 -- # break 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@63 -- # break 00:18:31.290 20:45:46 ftl -- ftl/ftl.sh@66 -- # killprocess 74920 00:18:31.290 20:45:46 ftl -- common/autotest_common.sh@954 -- # '[' -z 74920 ']' 00:18:31.290 20:45:46 ftl -- common/autotest_common.sh@958 -- # kill -0 74920 00:18:31.290 20:45:46 ftl -- common/autotest_common.sh@959 -- # uname 00:18:31.290 20:45:46 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.290 20:45:46 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74920 00:18:31.290 killing process with pid 74920 00:18:31.290 20:45:46 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.290 20:45:46 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.290 20:45:46 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74920' 00:18:31.290 20:45:46 ftl -- common/autotest_common.sh@973 -- # kill 74920 00:18:31.290 20:45:46 ftl -- common/autotest_common.sh@978 -- # wait 74920 00:18:31.290 20:45:47 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:31.290 20:45:47 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:31.290 20:45:47 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:31.290 20:45:47 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.290 20:45:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:31.290 ************************************ 00:18:31.290 START TEST ftl_fio_basic 00:18:31.290 ************************************ 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:31.290 * Looking for test storage... 00:18:31.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.290 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.291 --rc genhtml_branch_coverage=1 00:18:31.291 --rc genhtml_function_coverage=1 00:18:31.291 --rc genhtml_legend=1 00:18:31.291 --rc geninfo_all_blocks=1 00:18:31.291 --rc geninfo_unexecuted_blocks=1 00:18:31.291 00:18:31.291 ' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.291 --rc genhtml_branch_coverage=1 00:18:31.291 --rc genhtml_function_coverage=1 00:18:31.291 --rc genhtml_legend=1 00:18:31.291 --rc geninfo_all_blocks=1 00:18:31.291 --rc geninfo_unexecuted_blocks=1 00:18:31.291 00:18:31.291 ' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.291 --rc genhtml_branch_coverage=1 00:18:31.291 --rc genhtml_function_coverage=1 00:18:31.291 --rc genhtml_legend=1 00:18:31.291 --rc geninfo_all_blocks=1 00:18:31.291 --rc geninfo_unexecuted_blocks=1 00:18:31.291 00:18:31.291 ' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.291 --rc genhtml_branch_coverage=1 00:18:31.291 --rc genhtml_function_coverage=1 00:18:31.291 --rc genhtml_legend=1 00:18:31.291 --rc geninfo_all_blocks=1 00:18:31.291 --rc geninfo_unexecuted_blocks=1 00:18:31.291 00:18:31.291 ' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
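Both teardowns in this stretch (pid 74125 for ublk_recovery, pid 74920 for the first spdk_tgt) go through autotest_common.sh's killprocess, whose trace is visible above: confirm the pid is non-empty and still alive with kill -0, resolve its command name so a sudo wrapper is never signalled directly, then kill and reap it. A condensed sketch of that flow, assuming Linux and a pid the calling shell owns; the real helper also covers the non-Linux and sudo re-exec cases:

# Sketch of the killprocess flow traced above (Linux-only simplification).
killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1                        # same '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 1           # pid must still be alive
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for spdk_tgt
    if [[ $process_name == sudo ]]; then
        pid=$(pgrep -P "$pid")                       # signal sudo's child, not sudo
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # wait reaps the child; this works because the test script itself launched
    # spdk_tgt, so the pid is a child of this shell
    wait "$pid" 2>/dev/null || true
}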
00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75058 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75058 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75058 ']' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.291 20:45:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:31.291 [2024-12-06 20:45:47.992985] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
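fio.sh's three positional arguments map directly onto the trace above: base device, cache device, and a suite name that indexes an associative array of fio job names. A minimal sketch of that dispatch, with the job lists copied verbatim from the suite assignments earlier in the trace; the loop body here is a placeholder echo, not the real fio invocation:

# Sketch of fio.sh's suite selection; lists copied from the trace above.
declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

device=$1 cache_device=$2 suite_name=$3   # e.g. 0000:00:11.0 0000:00:10.0 basic
tests=${suite[$suite_name]}
if [[ -z $tests ]]; then
    echo "unknown suite: $suite_name" >&2
    exit 1
fi
for t in $tests; do
    echo "would run fio job $t on $device (nv cache on $cache_device)"
done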
00:18:31.291 [2024-12-06 20:45:47.993205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75058 ] 00:18:31.291 [2024-12-06 20:45:48.152504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:31.291 [2024-12-06 20:45:48.253456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.291 [2024-12-06 20:45:48.253725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.291 [2024-12-06 20:45:48.253797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.862 20:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.862 20:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:31.862 20:45:48 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:31.862 20:45:48 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:31.862 20:45:48 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:31.862 20:45:48 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:31.862 20:45:48 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:31.862 20:45:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:32.124 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:32.124 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:32.124 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:32.124 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:32.124 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:32.124 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:32.124 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:32.124 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:32.385 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:32.385 { 00:18:32.385 "name": "nvme0n1", 00:18:32.385 "aliases": [ 00:18:32.385 "f94ec135-3d00-4a09-913d-49916ccfb260" 00:18:32.385 ], 00:18:32.385 "product_name": "NVMe disk", 00:18:32.385 "block_size": 4096, 00:18:32.385 "num_blocks": 1310720, 00:18:32.385 "uuid": "f94ec135-3d00-4a09-913d-49916ccfb260", 00:18:32.385 "numa_id": -1, 00:18:32.385 "assigned_rate_limits": { 00:18:32.385 "rw_ios_per_sec": 0, 00:18:32.385 "rw_mbytes_per_sec": 0, 00:18:32.385 "r_mbytes_per_sec": 0, 00:18:32.385 "w_mbytes_per_sec": 0 00:18:32.385 }, 00:18:32.385 "claimed": false, 00:18:32.385 "zoned": false, 00:18:32.385 "supported_io_types": { 00:18:32.385 "read": true, 00:18:32.385 "write": true, 00:18:32.385 "unmap": true, 00:18:32.385 "flush": true, 00:18:32.385 "reset": true, 00:18:32.385 "nvme_admin": true, 00:18:32.385 "nvme_io": true, 00:18:32.385 "nvme_io_md": false, 00:18:32.385 "write_zeroes": true, 00:18:32.385 "zcopy": false, 00:18:32.385 "get_zone_info": false, 00:18:32.385 "zone_management": false, 00:18:32.385 "zone_append": false, 00:18:32.385 "compare": true, 00:18:32.385 "compare_and_write": false, 00:18:32.385 "abort": true, 00:18:32.385 
"seek_hole": false, 00:18:32.385 "seek_data": false, 00:18:32.385 "copy": true, 00:18:32.385 "nvme_iov_md": false 00:18:32.385 }, 00:18:32.385 "driver_specific": { 00:18:32.385 "nvme": [ 00:18:32.385 { 00:18:32.385 "pci_address": "0000:00:11.0", 00:18:32.385 "trid": { 00:18:32.385 "trtype": "PCIe", 00:18:32.385 "traddr": "0000:00:11.0" 00:18:32.385 }, 00:18:32.385 "ctrlr_data": { 00:18:32.385 "cntlid": 0, 00:18:32.385 "vendor_id": "0x1b36", 00:18:32.385 "model_number": "QEMU NVMe Ctrl", 00:18:32.385 "serial_number": "12341", 00:18:32.385 "firmware_revision": "8.0.0", 00:18:32.385 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:32.385 "oacs": { 00:18:32.385 "security": 0, 00:18:32.385 "format": 1, 00:18:32.385 "firmware": 0, 00:18:32.385 "ns_manage": 1 00:18:32.385 }, 00:18:32.385 "multi_ctrlr": false, 00:18:32.385 "ana_reporting": false 00:18:32.385 }, 00:18:32.385 "vs": { 00:18:32.385 "nvme_version": "1.4" 00:18:32.385 }, 00:18:32.385 "ns_data": { 00:18:32.385 "id": 1, 00:18:32.385 "can_share": false 00:18:32.385 } 00:18:32.385 } 00:18:32.385 ], 00:18:32.385 "mp_policy": "active_passive" 00:18:32.385 } 00:18:32.385 } 00:18:32.385 ]' 00:18:32.385 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:32.385 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:32.385 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:32.385 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:32.385 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:32.385 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:32.385 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:32.385 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:32.385 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:32.385 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:32.385 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:32.644 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:32.644 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=c5609759-bbd3-4f9d-b220-5bb79a1fd3fa 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c5609759-bbd3-4f9d-b220-5bb79a1fd3fa 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=3009488b-d1f0-4926-b3a0-d9216a994a2f 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3009488b-d1f0-4926-b3a0-d9216a994a2f 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=3009488b-d1f0-4926-b3a0-d9216a994a2f 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 3009488b-d1f0-4926-b3a0-d9216a994a2f 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=3009488b-d1f0-4926-b3a0-d9216a994a2f 
00:18:32.905 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:32.905 20:45:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3009488b-d1f0-4926-b3a0-d9216a994a2f 00:18:33.166 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:33.166 { 00:18:33.166 "name": "3009488b-d1f0-4926-b3a0-d9216a994a2f", 00:18:33.166 "aliases": [ 00:18:33.166 "lvs/nvme0n1p0" 00:18:33.166 ], 00:18:33.166 "product_name": "Logical Volume", 00:18:33.166 "block_size": 4096, 00:18:33.166 "num_blocks": 26476544, 00:18:33.166 "uuid": "3009488b-d1f0-4926-b3a0-d9216a994a2f", 00:18:33.166 "assigned_rate_limits": { 00:18:33.166 "rw_ios_per_sec": 0, 00:18:33.166 "rw_mbytes_per_sec": 0, 00:18:33.166 "r_mbytes_per_sec": 0, 00:18:33.166 "w_mbytes_per_sec": 0 00:18:33.166 }, 00:18:33.166 "claimed": false, 00:18:33.167 "zoned": false, 00:18:33.167 "supported_io_types": { 00:18:33.167 "read": true, 00:18:33.167 "write": true, 00:18:33.167 "unmap": true, 00:18:33.167 "flush": false, 00:18:33.167 "reset": true, 00:18:33.167 "nvme_admin": false, 00:18:33.167 "nvme_io": false, 00:18:33.167 "nvme_io_md": false, 00:18:33.167 "write_zeroes": true, 00:18:33.167 "zcopy": false, 00:18:33.167 "get_zone_info": false, 00:18:33.167 "zone_management": false, 00:18:33.167 "zone_append": false, 00:18:33.167 "compare": false, 00:18:33.167 "compare_and_write": false, 00:18:33.167 "abort": false, 00:18:33.167 "seek_hole": true, 00:18:33.167 "seek_data": true, 00:18:33.167 "copy": false, 00:18:33.167 "nvme_iov_md": false 00:18:33.167 }, 00:18:33.167 "driver_specific": { 00:18:33.167 "lvol": { 00:18:33.167 "lvol_store_uuid": "c5609759-bbd3-4f9d-b220-5bb79a1fd3fa", 00:18:33.167 "base_bdev": "nvme0n1", 00:18:33.167 "thin_provision": true, 00:18:33.167 "num_allocated_clusters": 0, 00:18:33.167 "snapshot": false, 00:18:33.167 "clone": false, 00:18:33.167 "esnap_clone": false 00:18:33.167 } 00:18:33.167 } 00:18:33.167 } 00:18:33.167 ]' 00:18:33.167 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:33.167 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:33.167 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:33.167 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:33.167 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:33.167 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:33.167 20:45:50 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:33.167 20:45:50 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:33.167 20:45:50 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:33.426 20:45:50 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:33.426 20:45:50 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:33.426 20:45:50 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 3009488b-d1f0-4926-b3a0-d9216a994a2f 00:18:33.426 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=3009488b-d1f0-4926-b3a0-d9216a994a2f 00:18:33.426 20:45:50 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:33.426 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:33.426 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:33.426 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3009488b-d1f0-4926-b3a0-d9216a994a2f 00:18:33.687 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:33.687 { 00:18:33.687 "name": "3009488b-d1f0-4926-b3a0-d9216a994a2f", 00:18:33.687 "aliases": [ 00:18:33.687 "lvs/nvme0n1p0" 00:18:33.687 ], 00:18:33.687 "product_name": "Logical Volume", 00:18:33.687 "block_size": 4096, 00:18:33.687 "num_blocks": 26476544, 00:18:33.687 "uuid": "3009488b-d1f0-4926-b3a0-d9216a994a2f", 00:18:33.687 "assigned_rate_limits": { 00:18:33.688 "rw_ios_per_sec": 0, 00:18:33.688 "rw_mbytes_per_sec": 0, 00:18:33.688 "r_mbytes_per_sec": 0, 00:18:33.688 "w_mbytes_per_sec": 0 00:18:33.688 }, 00:18:33.688 "claimed": false, 00:18:33.688 "zoned": false, 00:18:33.688 "supported_io_types": { 00:18:33.688 "read": true, 00:18:33.688 "write": true, 00:18:33.688 "unmap": true, 00:18:33.688 "flush": false, 00:18:33.688 "reset": true, 00:18:33.688 "nvme_admin": false, 00:18:33.688 "nvme_io": false, 00:18:33.688 "nvme_io_md": false, 00:18:33.688 "write_zeroes": true, 00:18:33.688 "zcopy": false, 00:18:33.688 "get_zone_info": false, 00:18:33.688 "zone_management": false, 00:18:33.688 "zone_append": false, 00:18:33.688 "compare": false, 00:18:33.688 "compare_and_write": false, 00:18:33.688 "abort": false, 00:18:33.688 "seek_hole": true, 00:18:33.688 "seek_data": true, 00:18:33.688 "copy": false, 00:18:33.688 "nvme_iov_md": false 00:18:33.688 }, 00:18:33.688 "driver_specific": { 00:18:33.688 "lvol": { 00:18:33.688 "lvol_store_uuid": "c5609759-bbd3-4f9d-b220-5bb79a1fd3fa", 00:18:33.688 "base_bdev": "nvme0n1", 00:18:33.688 "thin_provision": true, 00:18:33.688 "num_allocated_clusters": 0, 00:18:33.688 "snapshot": false, 00:18:33.688 "clone": false, 00:18:33.688 "esnap_clone": false 00:18:33.688 } 00:18:33.688 } 00:18:33.688 } 00:18:33.688 ]' 00:18:33.688 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:33.688 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:33.688 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:33.688 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:33.688 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:33.688 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:33.688 20:45:50 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:33.688 20:45:50 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:33.949 20:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:33.949 20:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:33.949 20:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:33.949 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:33.949 20:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 3009488b-d1f0-4926-b3a0-d9216a994a2f 00:18:33.949 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=3009488b-d1f0-4926-b3a0-d9216a994a2f 00:18:33.949 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:33.949 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:33.949 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:33.949 20:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3009488b-d1f0-4926-b3a0-d9216a994a2f 00:18:34.209 20:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:34.209 { 00:18:34.209 "name": "3009488b-d1f0-4926-b3a0-d9216a994a2f", 00:18:34.209 "aliases": [ 00:18:34.209 "lvs/nvme0n1p0" 00:18:34.209 ], 00:18:34.209 "product_name": "Logical Volume", 00:18:34.209 "block_size": 4096, 00:18:34.209 "num_blocks": 26476544, 00:18:34.209 "uuid": "3009488b-d1f0-4926-b3a0-d9216a994a2f", 00:18:34.209 "assigned_rate_limits": { 00:18:34.209 "rw_ios_per_sec": 0, 00:18:34.210 "rw_mbytes_per_sec": 0, 00:18:34.210 "r_mbytes_per_sec": 0, 00:18:34.210 "w_mbytes_per_sec": 0 00:18:34.210 }, 00:18:34.210 "claimed": false, 00:18:34.210 "zoned": false, 00:18:34.210 "supported_io_types": { 00:18:34.210 "read": true, 00:18:34.210 "write": true, 00:18:34.210 "unmap": true, 00:18:34.210 "flush": false, 00:18:34.210 "reset": true, 00:18:34.210 "nvme_admin": false, 00:18:34.210 "nvme_io": false, 00:18:34.210 "nvme_io_md": false, 00:18:34.210 "write_zeroes": true, 00:18:34.210 "zcopy": false, 00:18:34.210 "get_zone_info": false, 00:18:34.210 "zone_management": false, 00:18:34.210 "zone_append": false, 00:18:34.210 "compare": false, 00:18:34.210 "compare_and_write": false, 00:18:34.210 "abort": false, 00:18:34.210 "seek_hole": true, 00:18:34.210 "seek_data": true, 00:18:34.210 "copy": false, 00:18:34.210 "nvme_iov_md": false 00:18:34.210 }, 00:18:34.210 "driver_specific": { 00:18:34.210 "lvol": { 00:18:34.210 "lvol_store_uuid": "c5609759-bbd3-4f9d-b220-5bb79a1fd3fa", 00:18:34.210 "base_bdev": "nvme0n1", 00:18:34.210 "thin_provision": true, 00:18:34.210 "num_allocated_clusters": 0, 00:18:34.210 "snapshot": false, 00:18:34.210 "clone": false, 00:18:34.210 "esnap_clone": false 00:18:34.210 } 00:18:34.210 } 00:18:34.210 } 00:18:34.210 ]' 00:18:34.210 20:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:34.210 20:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:34.210 20:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:34.210 20:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:34.210 20:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:34.210 20:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:34.210 20:45:51 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:34.210 20:45:51 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:34.210 20:45:51 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3009488b-d1f0-4926-b3a0-d9216a994a2f -c nvc0n1p0 --l2p_dram_limit 60 00:18:34.472 [2024-12-06 20:45:51.464006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.472 [2024-12-06 20:45:51.464062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:34.472 [2024-12-06 20:45:51.464080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:34.472 
[2024-12-06 20:45:51.464091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.472 [2024-12-06 20:45:51.464169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.472 [2024-12-06 20:45:51.464182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:34.472 [2024-12-06 20:45:51.464197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:34.472 [2024-12-06 20:45:51.464205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.472 [2024-12-06 20:45:51.464248] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:34.472 [2024-12-06 20:45:51.465062] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:34.472 [2024-12-06 20:45:51.465100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.472 [2024-12-06 20:45:51.465109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:34.472 [2024-12-06 20:45:51.465121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:18:34.472 [2024-12-06 20:45:51.465129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.472 [2024-12-06 20:45:51.465177] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 185c2959-5a79-4034-b7f0-151ba2bee215 00:18:34.472 [2024-12-06 20:45:51.466991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.472 [2024-12-06 20:45:51.467036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:34.472 [2024-12-06 20:45:51.467049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:34.472 [2024-12-06 20:45:51.467061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.472 [2024-12-06 20:45:51.475813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.472 [2024-12-06 20:45:51.475864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:34.472 [2024-12-06 20:45:51.475876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.663 ms 00:18:34.472 [2024-12-06 20:45:51.476088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.472 [2024-12-06 20:45:51.476278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.472 [2024-12-06 20:45:51.476304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:34.472 [2024-12-06 20:45:51.476316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:34.472 [2024-12-06 20:45:51.476331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.472 [2024-12-06 20:45:51.476389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.472 [2024-12-06 20:45:51.476401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:34.472 [2024-12-06 20:45:51.476410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:34.472 [2024-12-06 20:45:51.476420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.472 [2024-12-06 20:45:51.476450] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:34.472 [2024-12-06 20:45:51.480911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.473 [2024-12-06 
20:45:51.480953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:34.473 [2024-12-06 20:45:51.480969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.464 ms 00:18:34.473 [2024-12-06 20:45:51.480980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.473 [2024-12-06 20:45:51.481035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.473 [2024-12-06 20:45:51.481044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:34.473 [2024-12-06 20:45:51.481055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:34.473 [2024-12-06 20:45:51.481063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.473 [2024-12-06 20:45:51.481127] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:34.473 [2024-12-06 20:45:51.481299] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:34.473 [2024-12-06 20:45:51.481317] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:34.473 [2024-12-06 20:45:51.481330] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:34.473 [2024-12-06 20:45:51.481343] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:34.473 [2024-12-06 20:45:51.481352] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:34.473 [2024-12-06 20:45:51.481364] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:34.473 [2024-12-06 20:45:51.481372] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:34.473 [2024-12-06 20:45:51.481382] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:34.473 [2024-12-06 20:45:51.481390] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:34.473 [2024-12-06 20:45:51.481401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.473 [2024-12-06 20:45:51.481412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:34.473 [2024-12-06 20:45:51.481422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:18:34.473 [2024-12-06 20:45:51.481431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.473 [2024-12-06 20:45:51.481532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.473 [2024-12-06 20:45:51.481541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:34.473 [2024-12-06 20:45:51.481551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:34.473 [2024-12-06 20:45:51.481559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.473 [2024-12-06 20:45:51.481682] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:34.473 [2024-12-06 20:45:51.481692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:34.473 [2024-12-06 20:45:51.481706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:34.473 [2024-12-06 20:45:51.481714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.473 [2024-12-06 20:45:51.481724] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:18:34.473 [2024-12-06 20:45:51.481731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:34.473 [2024-12-06 20:45:51.481740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:34.473 [2024-12-06 20:45:51.481747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:34.473 [2024-12-06 20:45:51.481758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:34.473 [2024-12-06 20:45:51.481765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:34.473 [2024-12-06 20:45:51.481774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:34.473 [2024-12-06 20:45:51.481780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:34.473 [2024-12-06 20:45:51.481789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:34.473 [2024-12-06 20:45:51.481796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:34.473 [2024-12-06 20:45:51.481805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:34.473 [2024-12-06 20:45:51.481812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.473 [2024-12-06 20:45:51.481822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:34.473 [2024-12-06 20:45:51.481835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:34.473 [2024-12-06 20:45:51.481844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.473 [2024-12-06 20:45:51.481852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:34.473 [2024-12-06 20:45:51.481861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:34.473 [2024-12-06 20:45:51.481868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:34.473 [2024-12-06 20:45:51.481877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:34.473 [2024-12-06 20:45:51.481884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:34.473 [2024-12-06 20:45:51.481915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:34.473 [2024-12-06 20:45:51.481922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:34.473 [2024-12-06 20:45:51.481931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:34.473 [2024-12-06 20:45:51.481938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:34.473 [2024-12-06 20:45:51.481948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:34.473 [2024-12-06 20:45:51.481955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:34.473 [2024-12-06 20:45:51.481963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:34.473 [2024-12-06 20:45:51.481970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:34.473 [2024-12-06 20:45:51.481982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:34.473 [2024-12-06 20:45:51.482005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:34.473 [2024-12-06 20:45:51.482015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:34.473 [2024-12-06 20:45:51.482022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:34.473 [2024-12-06 20:45:51.482031] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:34.473 [2024-12-06 20:45:51.482038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:34.473 [2024-12-06 20:45:51.482047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:34.473 [2024-12-06 20:45:51.482053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.473 [2024-12-06 20:45:51.482062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:34.473 [2024-12-06 20:45:51.482069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:34.473 [2024-12-06 20:45:51.482078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.473 [2024-12-06 20:45:51.482085] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:34.473 [2024-12-06 20:45:51.482095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:34.473 [2024-12-06 20:45:51.482103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:34.473 [2024-12-06 20:45:51.482113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.473 [2024-12-06 20:45:51.482121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:34.473 [2024-12-06 20:45:51.482133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:34.473 [2024-12-06 20:45:51.482141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:34.473 [2024-12-06 20:45:51.482151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:34.473 [2024-12-06 20:45:51.482158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:34.473 [2024-12-06 20:45:51.482167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:34.473 [2024-12-06 20:45:51.482177] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:34.473 [2024-12-06 20:45:51.482189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:34.473 [2024-12-06 20:45:51.482198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:34.473 [2024-12-06 20:45:51.482208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:34.473 [2024-12-06 20:45:51.482215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:34.473 [2024-12-06 20:45:51.482224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:34.473 [2024-12-06 20:45:51.482233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:34.473 [2024-12-06 20:45:51.482243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:34.473 [2024-12-06 20:45:51.482251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:34.473 [2024-12-06 20:45:51.482261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:18:34.473 [2024-12-06 20:45:51.482269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:34.473 [2024-12-06 20:45:51.482280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:34.473 [2024-12-06 20:45:51.482288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:34.473 [2024-12-06 20:45:51.482296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:34.473 [2024-12-06 20:45:51.482304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:34.473 [2024-12-06 20:45:51.482313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:34.473 [2024-12-06 20:45:51.482320] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:34.473 [2024-12-06 20:45:51.482331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:34.473 [2024-12-06 20:45:51.482341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:34.474 [2024-12-06 20:45:51.482350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:34.474 [2024-12-06 20:45:51.482358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:34.474 [2024-12-06 20:45:51.482366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:34.474 [2024-12-06 20:45:51.482375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.474 [2024-12-06 20:45:51.482385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:34.474 [2024-12-06 20:45:51.482392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:18:34.474 [2024-12-06 20:45:51.482402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.474 [2024-12-06 20:45:51.482467] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
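Everything from "Check configuration" down through the layout dump above is the work of the single bdev_ftl_create RPC issued at fio.sh@60 earlier in the trace: it binds the thin-provisioned lvol (base device) and the nvc0n1p0 split (NV cache write buffer) into one FTL bdev, then scrubs the cache region before first use. A sketch of the same RPC sequence against a running spdk_tgt, with the concrete UUIDs from this run replaced by placeholders; the rpc.py path and all flags are as shown in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 1. base device: thin-provisioned (-t) 103424 MiB lvol on the lvs built
#    over nvme0n1
$rpc bdev_lvol_create_lvstore nvme0n1 lvs                  # prints <lvs-uuid>
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>    # prints <lvol-uuid>

# 2. NV cache: a 5171 MiB slice split off the cache-side namespace nvc0n1
$rpc bdev_split_create nvc0n1 -s 5171 1                    # -> nvc0n1p0

# 3. glue both into ftl0; -t 240 matches fio.sh's $timeout, and the 60 MiB
#    L2P limit is why the log later reports "l2p maximum resident size is:
#    59 (of 60) MiB"
$rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 60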
00:18:34.474 [2024-12-06 20:45:51.482482] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:37.769 [2024-12-06 20:45:54.263644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.769 [2024-12-06 20:45:54.263704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:37.769 [2024-12-06 20:45:54.263719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2781.168 ms 00:18:37.769 [2024-12-06 20:45:54.263729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.769 [2024-12-06 20:45:54.288206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.769 [2024-12-06 20:45:54.288368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:37.769 [2024-12-06 20:45:54.288387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.262 ms 00:18:37.769 [2024-12-06 20:45:54.288396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.769 [2024-12-06 20:45:54.288514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.769 [2024-12-06 20:45:54.288526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:37.769 [2024-12-06 20:45:54.288535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:37.769 [2024-12-06 20:45:54.288546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.769 [2024-12-06 20:45:54.328532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.769 [2024-12-06 20:45:54.328574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:37.769 [2024-12-06 20:45:54.328589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.946 ms 00:18:37.769 [2024-12-06 20:45:54.328600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.769 [2024-12-06 20:45:54.328635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.769 [2024-12-06 20:45:54.328645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:37.769 [2024-12-06 20:45:54.328654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:37.769 [2024-12-06 20:45:54.328662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.769 [2024-12-06 20:45:54.329037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.769 [2024-12-06 20:45:54.329056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:37.769 [2024-12-06 20:45:54.329065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:18:37.769 [2024-12-06 20:45:54.329076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.769 [2024-12-06 20:45:54.329195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.769 [2024-12-06 20:45:54.329206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:37.769 [2024-12-06 20:45:54.329214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:37.770 [2024-12-06 20:45:54.329225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.343289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.343320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:37.770 [2024-12-06 
20:45:54.343330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.043 ms 00:18:37.770 [2024-12-06 20:45:54.343339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.354552] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:37.770 [2024-12-06 20:45:54.368319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.368350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:37.770 [2024-12-06 20:45:54.368365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.901 ms 00:18:37.770 [2024-12-06 20:45:54.368372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.417277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.417412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:37.770 [2024-12-06 20:45:54.417437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.869 ms 00:18:37.770 [2024-12-06 20:45:54.417446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.417608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.417619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:37.770 [2024-12-06 20:45:54.417631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:18:37.770 [2024-12-06 20:45:54.417638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.440498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.440612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:37.770 [2024-12-06 20:45:54.440632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.822 ms 00:18:37.770 [2024-12-06 20:45:54.440640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.462389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.462417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:37.770 [2024-12-06 20:45:54.462431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.721 ms 00:18:37.770 [2024-12-06 20:45:54.462438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.462992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.463006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:37.770 [2024-12-06 20:45:54.463017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:18:37.770 [2024-12-06 20:45:54.463024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.525491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.525524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:37.770 [2024-12-06 20:45:54.525539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.428 ms 00:18:37.770 [2024-12-06 20:45:54.525550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 
20:45:54.549040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.549071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:37.770 [2024-12-06 20:45:54.549084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.412 ms 00:18:37.770 [2024-12-06 20:45:54.549092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.571765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.571795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:37.770 [2024-12-06 20:45:54.571807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.632 ms 00:18:37.770 [2024-12-06 20:45:54.571815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.594932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.594962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:37.770 [2024-12-06 20:45:54.594974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.081 ms 00:18:37.770 [2024-12-06 20:45:54.594981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.595023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.595033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:37.770 [2024-12-06 20:45:54.595047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:37.770 [2024-12-06 20:45:54.595055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.595133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.770 [2024-12-06 20:45:54.595143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:37.770 [2024-12-06 20:45:54.595152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:37.770 [2024-12-06 20:45:54.595160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.770 [2024-12-06 20:45:54.596018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3131.614 ms, result 0 00:18:37.770 { 00:18:37.770 "name": "ftl0", 00:18:37.770 "uuid": "185c2959-5a79-4034-b7f0-151ba2bee215" 00:18:37.770 } 00:18:37.770 20:45:54 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:37.770 20:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:37.770 20:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:37.770 20:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:37.770 20:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:37.770 20:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:37.770 20:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:37.770 20:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:38.030 [ 00:18:38.030 { 00:18:38.030 "name": "ftl0", 00:18:38.030 "aliases": [ 00:18:38.030 "185c2959-5a79-4034-b7f0-151ba2bee215" 00:18:38.030 ], 00:18:38.030 "product_name": "FTL 
disk", 00:18:38.030 "block_size": 4096, 00:18:38.030 "num_blocks": 20971520, 00:18:38.030 "uuid": "185c2959-5a79-4034-b7f0-151ba2bee215", 00:18:38.030 "assigned_rate_limits": { 00:18:38.030 "rw_ios_per_sec": 0, 00:18:38.030 "rw_mbytes_per_sec": 0, 00:18:38.030 "r_mbytes_per_sec": 0, 00:18:38.030 "w_mbytes_per_sec": 0 00:18:38.030 }, 00:18:38.030 "claimed": false, 00:18:38.030 "zoned": false, 00:18:38.030 "supported_io_types": { 00:18:38.030 "read": true, 00:18:38.030 "write": true, 00:18:38.030 "unmap": true, 00:18:38.030 "flush": true, 00:18:38.030 "reset": false, 00:18:38.030 "nvme_admin": false, 00:18:38.030 "nvme_io": false, 00:18:38.030 "nvme_io_md": false, 00:18:38.030 "write_zeroes": true, 00:18:38.030 "zcopy": false, 00:18:38.030 "get_zone_info": false, 00:18:38.030 "zone_management": false, 00:18:38.030 "zone_append": false, 00:18:38.030 "compare": false, 00:18:38.030 "compare_and_write": false, 00:18:38.030 "abort": false, 00:18:38.030 "seek_hole": false, 00:18:38.030 "seek_data": false, 00:18:38.030 "copy": false, 00:18:38.030 "nvme_iov_md": false 00:18:38.030 }, 00:18:38.030 "driver_specific": { 00:18:38.030 "ftl": { 00:18:38.030 "base_bdev": "3009488b-d1f0-4926-b3a0-d9216a994a2f", 00:18:38.030 "cache": "nvc0n1p0" 00:18:38.030 } 00:18:38.030 } 00:18:38.030 } 00:18:38.030 ] 00:18:38.030 20:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:38.030 20:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:38.030 20:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:38.291 20:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:38.291 20:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:38.291 [2024-12-06 20:45:55.400817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.291 [2024-12-06 20:45:55.400861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:38.291 [2024-12-06 20:45:55.400874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:38.291 [2024-12-06 20:45:55.400884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.291 [2024-12-06 20:45:55.400933] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:38.291 [2024-12-06 20:45:55.403505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.291 [2024-12-06 20:45:55.403533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:38.291 [2024-12-06 20:45:55.403546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.553 ms 00:18:38.291 [2024-12-06 20:45:55.403554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.291 [2024-12-06 20:45:55.403945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.291 [2024-12-06 20:45:55.403960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:38.291 [2024-12-06 20:45:55.403971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:18:38.291 [2024-12-06 20:45:55.403978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.291 [2024-12-06 20:45:55.407216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.291 [2024-12-06 20:45:55.407238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:38.291 
[2024-12-06 20:45:55.407248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.216 ms 00:18:38.291 [2024-12-06 20:45:55.407256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.291 [2024-12-06 20:45:55.413475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.291 [2024-12-06 20:45:55.413500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:38.291 [2024-12-06 20:45:55.413512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.196 ms 00:18:38.291 [2024-12-06 20:45:55.413519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.552 [2024-12-06 20:45:55.436602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.552 [2024-12-06 20:45:55.436634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:38.552 [2024-12-06 20:45:55.436657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.007 ms 00:18:38.552 [2024-12-06 20:45:55.436665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.552 [2024-12-06 20:45:55.451237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.552 [2024-12-06 20:45:55.451271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:38.552 [2024-12-06 20:45:55.451287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.534 ms 00:18:38.552 [2024-12-06 20:45:55.451295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.552 [2024-12-06 20:45:55.451467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.552 [2024-12-06 20:45:55.451477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:38.552 [2024-12-06 20:45:55.451487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:18:38.552 [2024-12-06 20:45:55.451495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.552 [2024-12-06 20:45:55.474434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.552 [2024-12-06 20:45:55.474461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:38.552 [2024-12-06 20:45:55.474473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.911 ms 00:18:38.552 [2024-12-06 20:45:55.474480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.552 [2024-12-06 20:45:55.496739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.552 [2024-12-06 20:45:55.496763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:38.552 [2024-12-06 20:45:55.496774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.219 ms 00:18:38.552 [2024-12-06 20:45:55.496781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.552 [2024-12-06 20:45:55.518880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.552 [2024-12-06 20:45:55.518912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:38.552 [2024-12-06 20:45:55.518923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.060 ms 00:18:38.552 [2024-12-06 20:45:55.518930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.552 [2024-12-06 20:45:55.541391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.552 [2024-12-06 20:45:55.541416] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:38.552 [2024-12-06 20:45:55.541427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.379 ms 00:18:38.552 [2024-12-06 20:45:55.541434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.552 [2024-12-06 20:45:55.541472] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:38.552 [2024-12-06 20:45:55.541485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 
[2024-12-06 20:45:55.541670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:38.552 [2024-12-06 20:45:55.541743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:38.553 [2024-12-06 20:45:55.541882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.541999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:38.553 [2024-12-06 20:45:55.542349] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:38.553 [2024-12-06 20:45:55.542358] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 185c2959-5a79-4034-b7f0-151ba2bee215 00:18:38.553 [2024-12-06 20:45:55.542365] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:38.553 [2024-12-06 20:45:55.542375] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:38.553 [2024-12-06 20:45:55.542382] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:38.553 [2024-12-06 20:45:55.542393] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:38.553 [2024-12-06 20:45:55.542399] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:38.553 [2024-12-06 20:45:55.542409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:38.553 [2024-12-06 20:45:55.542415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:38.553 [2024-12-06 20:45:55.542423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:38.553 [2024-12-06 20:45:55.542429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:38.553 [2024-12-06 20:45:55.542438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.553 [2024-12-06 20:45:55.542445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:38.553 [2024-12-06 20:45:55.542455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:18:38.553 [2024-12-06 20:45:55.542462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.553 [2024-12-06 20:45:55.554731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.553 [2024-12-06 20:45:55.554757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:38.553 [2024-12-06 20:45:55.554768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.224 ms 00:18:38.553 [2024-12-06 20:45:55.554775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.553 [2024-12-06 20:45:55.555140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.553 [2024-12-06 20:45:55.555149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:38.553 [2024-12-06 20:45:55.555159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:18:38.553 [2024-12-06 20:45:55.555167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.553 [2024-12-06 20:45:55.598572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.553 [2024-12-06 20:45:55.598604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:38.554 [2024-12-06 20:45:55.598615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.554 [2024-12-06 20:45:55.598623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
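Every trace_step record in this log follows the same pattern: an Action or Rollback header, then name, duration, and status lines, with a closing "Management process finished … duration = N ms" summary giving the wall-clock time of the whole sequence. For a rough offline cross-check, the per-step durations can be extracted and summed from a saved copy of this console output:

  # Hedged sketch: tally the per-step "duration: X ms" figures. "ftl.log" is
  # a placeholder name for a saved copy of this console log, not a file the
  # harness produces.
  grep -o 'duration: [0-9.]* ms' ftl.log | awk '{ sum += $2 } END { printf "%.3f ms\n", sum }'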
00:18:38.554 [2024-12-06 20:45:55.598676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.554 [2024-12-06 20:45:55.598685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:38.554 [2024-12-06 20:45:55.598694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.554 [2024-12-06 20:45:55.598701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.554 [2024-12-06 20:45:55.598778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.554 [2024-12-06 20:45:55.598794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:38.554 [2024-12-06 20:45:55.598804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.554 [2024-12-06 20:45:55.598811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.554 [2024-12-06 20:45:55.598835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.554 [2024-12-06 20:45:55.598842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:38.554 [2024-12-06 20:45:55.598851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.554 [2024-12-06 20:45:55.598858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.554 [2024-12-06 20:45:55.678232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.554 [2024-12-06 20:45:55.678271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:38.554 [2024-12-06 20:45:55.678283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.554 [2024-12-06 20:45:55.678292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.815 [2024-12-06 20:45:55.739737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.815 [2024-12-06 20:45:55.739771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:38.815 [2024-12-06 20:45:55.739783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.815 [2024-12-06 20:45:55.739791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.815 [2024-12-06 20:45:55.739874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.815 [2024-12-06 20:45:55.739884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:38.815 [2024-12-06 20:45:55.739914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.815 [2024-12-06 20:45:55.739922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.815 [2024-12-06 20:45:55.739981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.815 [2024-12-06 20:45:55.739990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:38.815 [2024-12-06 20:45:55.740000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.815 [2024-12-06 20:45:55.740007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.815 [2024-12-06 20:45:55.740108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.815 [2024-12-06 20:45:55.740118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:38.815 [2024-12-06 20:45:55.740127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.815 [2024-12-06 
20:45:55.740136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.815 [2024-12-06 20:45:55.740184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.815 [2024-12-06 20:45:55.740193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:38.815 [2024-12-06 20:45:55.740202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.815 [2024-12-06 20:45:55.740210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.815 [2024-12-06 20:45:55.740252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.815 [2024-12-06 20:45:55.740261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:38.815 [2024-12-06 20:45:55.740270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.815 [2024-12-06 20:45:55.740278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.815 [2024-12-06 20:45:55.740344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.815 [2024-12-06 20:45:55.740354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:38.815 [2024-12-06 20:45:55.740363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.815 [2024-12-06 20:45:55.740370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.815 [2024-12-06 20:45:55.740517] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.678 ms, result 0 00:18:38.815 true 00:18:38.815 20:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75058 00:18:38.815 20:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75058 ']' 00:18:38.815 20:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75058 00:18:38.815 20:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:38.815 20:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.815 20:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75058 00:18:38.815 20:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.815 20:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.815 killing process with pid 75058 00:18:38.815 20:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75058' 00:18:38.815 20:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75058 00:18:38.815 20:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75058 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:45.404 20:46:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:45.404 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:45.404 fio-3.35 00:18:45.404 Starting 1 thread 00:18:51.991 00:18:51.991 test: (groupid=0, jobs=1): err= 0: pid=75244: Fri Dec 6 20:46:08 2024 00:18:51.991 read: IOPS=786, BW=52.2MiB/s (54.8MB/s)(255MiB/4874msec) 00:18:51.991 slat (nsec): min=3084, max=47934, avg=7244.51, stdev=3873.37 00:18:51.991 clat (usec): min=281, max=1779, avg=574.63, stdev=207.95 00:18:51.991 lat (usec): min=285, max=1790, avg=581.87, stdev=209.49 00:18:51.991 clat percentiles (usec): 00:18:51.991 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 326], 20.00th=[ 412], 00:18:51.991 | 30.00th=[ 469], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 570], 00:18:51.991 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 922], 95.00th=[ 996], 00:18:51.991 | 99.00th=[ 1156], 99.50th=[ 1254], 99.90th=[ 1500], 99.95th=[ 1565], 00:18:51.991 | 99.99th=[ 1778] 00:18:51.991 write: IOPS=791, BW=52.6MiB/s (55.1MB/s)(256MiB/4870msec); 0 zone resets 00:18:51.991 slat (usec): min=14, max=118, avg=26.04, stdev= 7.90 00:18:51.991 clat (usec): min=270, max=1484, avg=644.11, stdev=216.15 00:18:51.991 lat (usec): min=293, max=1515, avg=670.15, stdev=219.33 00:18:51.991 clat percentiles (usec): 00:18:51.991 | 1.00th=[ 314], 5.00th=[ 330], 10.00th=[ 355], 20.00th=[ 482], 00:18:51.991 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 635], 60.00th=[ 652], 00:18:51.991 | 70.00th=[ 685], 80.00th=[ 750], 90.00th=[ 1004], 95.00th=[ 1057], 00:18:51.991 | 99.00th=[ 1221], 99.50th=[ 1303], 99.90th=[ 1434], 99.95th=[ 1450], 00:18:51.991 | 99.99th=[ 1483] 00:18:51.991 bw ( KiB/s): min=41208, max=62288, per=96.36%, avg=51880.33, stdev=7590.21, samples=9 00:18:51.991 iops : min= 606, max= 916, avg=762.89, stdev=111.63, samples=9 00:18:51.991 lat (usec) : 500=29.09%, 750=52.36%, 1000=11.15% 
00:18:51.991 lat (msec) : 2=7.40% 00:18:51.991 cpu : usr=98.89%, sys=0.06%, ctx=10, majf=0, minf=1169 00:18:51.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.991 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.991 00:18:51.991 Run status group 0 (all jobs): 00:18:51.991 READ: bw=52.2MiB/s (54.8MB/s), 52.2MiB/s-52.2MiB/s (54.8MB/s-54.8MB/s), io=255MiB (267MB), run=4874-4874msec 00:18:51.991 WRITE: bw=52.6MiB/s (55.1MB/s), 52.6MiB/s-52.6MiB/s (55.1MB/s-55.1MB/s), io=256MiB (269MB), run=4870-4870msec 00:18:52.933 ----------------------------------------------------- 00:18:52.933 Suppressions used: 00:18:52.933 count bytes template 00:18:52.933 1 5 /usr/src/fio/parse.c 00:18:52.933 1 8 libtcmalloc_minimal.so 00:18:52.933 1 904 libcrypto.so 00:18:52.933 ----------------------------------------------------- 00:18:52.933 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:52.933 20:46:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:52.933 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:52.933 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:52.933 fio-3.35 00:18:52.933 Starting 2 threads 00:19:19.552 00:19:19.552 first_half: (groupid=0, jobs=1): err= 0: pid=75359: Fri Dec 6 20:46:35 2024 00:19:19.552 read: IOPS=2711, BW=10.6MiB/s (11.1MB/s)(255MiB/24058msec) 00:19:19.552 slat (nsec): min=3108, max=36589, avg=4149.64, stdev=985.76 00:19:19.552 clat (usec): min=566, max=447824, avg=36670.20, stdev=20706.92 00:19:19.552 lat (usec): min=570, max=447828, avg=36674.35, stdev=20707.00 00:19:19.552 clat percentiles (msec): 00:19:19.552 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 31], 00:19:19.552 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 34], 00:19:19.552 | 70.00th=[ 36], 80.00th=[ 38], 90.00th=[ 43], 95.00th=[ 58], 00:19:19.552 | 99.00th=[ 138], 99.50th=[ 167], 99.90th=[ 232], 99.95th=[ 351], 00:19:19.552 | 99.99th=[ 435] 00:19:19.552 write: IOPS=3260, BW=12.7MiB/s (13.4MB/s)(256MiB/20098msec); 0 zone resets 00:19:19.552 slat (usec): min=3, max=2251, avg= 5.85, stdev=19.44 00:19:19.552 clat (usec): min=358, max=95230, avg=10430.97, stdev=15777.15 00:19:19.552 lat (usec): min=364, max=95235, avg=10436.82, stdev=15777.24 00:19:19.552 clat percentiles (usec): 00:19:19.552 | 1.00th=[ 701], 5.00th=[ 865], 10.00th=[ 1020], 20.00th=[ 1303], 00:19:19.552 | 30.00th=[ 2245], 40.00th=[ 3916], 50.00th=[ 5342], 60.00th=[ 6325], 00:19:19.552 | 70.00th=[ 8979], 80.00th=[13566], 90.00th=[20317], 95.00th=[59507], 00:19:19.552 | 99.00th=[71828], 99.50th=[78119], 99.90th=[90702], 99.95th=[92799], 00:19:19.552 | 99.99th=[94897] 00:19:19.552 bw ( KiB/s): min= 384, max=42840, per=95.87%, avg=22791.43, stdev=14551.09, samples=23 00:19:19.552 iops : min= 96, max=10710, avg=5697.83, stdev=3637.73, samples=23 00:19:19.552 lat (usec) : 500=0.02%, 750=1.03%, 1000=3.68% 00:19:19.552 lat (msec) : 2=9.79%, 4=6.04%, 10=16.30%, 20=9.09%, 50=47.91% 00:19:19.552 lat (msec) : 100=4.98%, 250=1.10%, 500=0.05% 00:19:19.552 cpu : usr=99.20%, sys=0.21%, ctx=37, majf=0, minf=5587 00:19:19.552 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:19.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.552 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.552 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.552 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.552 second_half: (groupid=0, jobs=1): err= 0: pid=75360: Fri Dec 6 20:46:35 2024 00:19:19.552 read: IOPS=2694, BW=10.5MiB/s (11.0MB/s)(255MiB/24245msec) 00:19:19.552 slat (usec): min=3, max=566, avg= 5.25, stdev= 3.12 00:19:19.552 clat (usec): min=682, max=471758, avg=36247.08, stdev=24006.76 00:19:19.552 lat (usec): min=686, max=471763, avg=36252.34, stdev=24006.78 00:19:19.552 clat percentiles (msec): 00:19:19.552 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 31], 00:19:19.552 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:19:19.552 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 41], 95.00th=[ 
51], 00:19:19.552 | 99.00th=[ 153], 99.50th=[ 190], 99.90th=[ 288], 99.95th=[ 359], 00:19:19.552 | 99.99th=[ 460] 00:19:19.552 write: IOPS=2971, BW=11.6MiB/s (12.2MB/s)(256MiB/22054msec); 0 zone resets 00:19:19.552 slat (usec): min=3, max=1037, avg= 6.62, stdev= 7.03 00:19:19.552 clat (usec): min=373, max=96108, avg=11206.46, stdev=16884.10 00:19:19.552 lat (usec): min=387, max=96113, avg=11213.08, stdev=16883.86 00:19:19.552 clat percentiles (usec): 00:19:19.552 | 1.00th=[ 685], 5.00th=[ 873], 10.00th=[ 1037], 20.00th=[ 1336], 00:19:19.552 | 30.00th=[ 1958], 40.00th=[ 3458], 50.00th=[ 4752], 60.00th=[ 5866], 00:19:19.552 | 70.00th=[ 8848], 80.00th=[15008], 90.00th=[33162], 95.00th=[60031], 00:19:19.553 | 99.00th=[72877], 99.50th=[79168], 99.90th=[90702], 99.95th=[92799], 00:19:19.553 | 99.99th=[94897] 00:19:19.553 bw ( KiB/s): min= 8, max=54544, per=81.69%, avg=19420.33, stdev=16276.01, samples=27 00:19:19.553 iops : min= 2, max=13636, avg=4855.07, stdev=4069.00, samples=27 00:19:19.553 lat (usec) : 500=0.02%, 750=1.04%, 1000=3.32% 00:19:19.553 lat (msec) : 2=10.85%, 4=7.13%, 10=15.20%, 20=7.66%, 50=48.91% 00:19:19.553 lat (msec) : 100=4.66%, 250=1.09%, 500=0.13% 00:19:19.553 cpu : usr=98.61%, sys=0.34%, ctx=121, majf=0, minf=5536 00:19:19.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:19.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.553 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.553 issued rwts: total=65320,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.553 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.553 00:19:19.553 Run status group 0 (all jobs): 00:19:19.553 READ: bw=21.0MiB/s (22.1MB/s), 10.5MiB/s-10.6MiB/s (11.0MB/s-11.1MB/s), io=510MiB (535MB), run=24058-24245msec 00:19:19.553 WRITE: bw=23.2MiB/s (24.3MB/s), 11.6MiB/s-12.7MiB/s (12.2MB/s-13.4MB/s), io=512MiB (537MB), run=20098-22054msec 00:19:20.939 ----------------------------------------------------- 00:19:20.939 Suppressions used: 00:19:20.939 count bytes template 00:19:20.939 2 10 /usr/src/fio/parse.c 00:19:20.939 1 96 /usr/src/fio/iolog.c 00:19:20.939 1 8 libtcmalloc_minimal.so 00:19:20.939 1 904 libcrypto.so 00:19:20.939 ----------------------------------------------------- 00:19:20.939 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:20.939 
20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:20.939 20:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:20.939 20:46:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:20.939 20:46:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:20.939 20:46:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:20.939 20:46:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:20.939 20:46:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:21.201 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:21.201 fio-3.35 00:19:21.201 Starting 1 thread 00:19:39.321 00:19:39.321 test: (groupid=0, jobs=1): err= 0: pid=75688: Fri Dec 6 20:46:54 2024 00:19:39.321 read: IOPS=7850, BW=30.7MiB/s (32.2MB/s)(255MiB/8305msec) 00:19:39.321 slat (usec): min=3, max=129, avg= 4.08, stdev= 1.60 00:19:39.321 clat (usec): min=477, max=42078, avg=16295.20, stdev=2976.24 00:19:39.321 lat (usec): min=484, max=42083, avg=16299.28, stdev=2977.00 00:19:39.321 clat percentiles (usec): 00:19:39.321 | 1.00th=[13566], 5.00th=[14353], 10.00th=[14484], 20.00th=[14615], 00:19:39.321 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270], 00:19:39.321 | 70.00th=[15533], 80.00th=[17171], 90.00th=[21627], 95.00th=[23462], 00:19:39.321 | 99.00th=[25560], 99.50th=[26608], 99.90th=[31065], 99.95th=[35390], 00:19:39.321 | 99.99th=[41157] 00:19:39.321 write: IOPS=10.3k, BW=40.4MiB/s (42.3MB/s)(256MiB/6342msec); 0 zone resets 00:19:39.321 slat (usec): min=4, max=433, avg= 6.31, stdev= 3.86 00:19:39.321 clat (usec): min=486, max=69948, avg=12341.40, stdev=11670.25 00:19:39.321 lat (usec): min=491, max=69955, avg=12347.71, stdev=11670.47 00:19:39.321 clat percentiles (usec): 00:19:39.321 | 1.00th=[ 709], 5.00th=[ 857], 10.00th=[ 955], 20.00th=[ 1106], 00:19:39.321 | 30.00th=[ 1270], 40.00th=[ 2147], 50.00th=[12125], 60.00th=[14877], 00:19:39.321 | 70.00th=[17171], 80.00th=[20317], 90.00th=[31327], 95.00th=[33162], 00:19:39.321 | 99.00th=[41157], 99.50th=[45876], 99.90th=[64750], 99.95th=[65799], 00:19:39.321 | 99.99th=[68682] 00:19:39.321 bw ( KiB/s): min=25453, max=54168, per=97.55%, avg=40320.15, stdev=10059.62, samples=13 00:19:39.321 iops : min= 6363, max=13542, avg=10080.00, stdev=2514.94, samples=13 00:19:39.321 lat (usec) : 500=0.01%, 750=0.77%, 1000=5.71% 00:19:39.321 lat (msec) : 2=13.37%, 4=1.17%, 10=1.28%, 20=60.49%, 50=17.05% 00:19:39.321 lat (msec) : 100=0.15% 00:19:39.321 cpu : usr=98.78%, sys=0.21%, ctx=31, majf=0, 
minf=5565 00:19:39.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:39.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.321 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:39.321 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.321 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:39.321 00:19:39.321 Run status group 0 (all jobs): 00:19:39.321 READ: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=255MiB (267MB), run=8305-8305msec 00:19:39.321 WRITE: bw=40.4MiB/s (42.3MB/s), 40.4MiB/s-40.4MiB/s (42.3MB/s-42.3MB/s), io=256MiB (268MB), run=6342-6342msec 00:19:39.321 ----------------------------------------------------- 00:19:39.321 Suppressions used: 00:19:39.322 count bytes template 00:19:39.322 1 5 /usr/src/fio/parse.c 00:19:39.322 2 192 /usr/src/fio/iolog.c 00:19:39.322 1 8 libtcmalloc_minimal.so 00:19:39.322 1 904 libcrypto.so 00:19:39.322 ----------------------------------------------------- 00:19:39.322 00:19:39.322 20:46:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:39.322 20:46:56 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.322 20:46:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:39.322 20:46:56 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:39.322 Remove shared memory files 00:19:39.322 20:46:56 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:39.322 20:46:56 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:39.322 20:46:56 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:39.322 20:46:56 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:39.583 20:46:56 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57110 /dev/shm/spdk_tgt_trace.pid73979 00:19:39.583 20:46:56 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:39.583 20:46:56 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:39.583 00:19:39.583 real 1m8.707s 00:19:39.583 user 2m28.117s 00:19:39.583 sys 0m3.117s 00:19:39.583 20:46:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.583 ************************************ 00:19:39.583 END TEST ftl_fio_basic 00:19:39.583 20:46:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:39.583 ************************************ 00:19:39.583 20:46:56 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:39.583 20:46:56 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:39.583 20:46:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.583 20:46:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:39.583 ************************************ 00:19:39.583 START TEST ftl_bdevperf 00:19:39.583 ************************************ 00:19:39.583 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:39.583 * Looking for test storage... 
00:19:39.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:39.583 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:39.583 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:39.583 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:39.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.584 --rc genhtml_branch_coverage=1 00:19:39.584 --rc genhtml_function_coverage=1 00:19:39.584 --rc genhtml_legend=1 00:19:39.584 --rc geninfo_all_blocks=1 00:19:39.584 --rc geninfo_unexecuted_blocks=1 00:19:39.584 00:19:39.584 ' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:39.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.584 --rc genhtml_branch_coverage=1 00:19:39.584 
--rc genhtml_function_coverage=1 00:19:39.584 --rc genhtml_legend=1 00:19:39.584 --rc geninfo_all_blocks=1 00:19:39.584 --rc geninfo_unexecuted_blocks=1 00:19:39.584 00:19:39.584 ' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:39.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.584 --rc genhtml_branch_coverage=1 00:19:39.584 --rc genhtml_function_coverage=1 00:19:39.584 --rc genhtml_legend=1 00:19:39.584 --rc geninfo_all_blocks=1 00:19:39.584 --rc geninfo_unexecuted_blocks=1 00:19:39.584 00:19:39.584 ' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:39.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.584 --rc genhtml_branch_coverage=1 00:19:39.584 --rc genhtml_function_coverage=1 00:19:39.584 --rc genhtml_legend=1 00:19:39.584 --rc geninfo_all_blocks=1 00:19:39.584 --rc geninfo_unexecuted_blocks=1 00:19:39.584 00:19:39.584 ' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:39.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75945 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75945 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75945 ']' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:39.584 20:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:39.846 [2024-12-06 20:46:56.766523] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
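bdevperf is started here with -z, which brings the app up and then idles it until a perform_tests RPC arrives, and -T ftl0, which limits the run to the FTL bdev configured below; waitforlisten then blocks until the target answers on its RPC socket. A hedged sketch of that launch-and-wait pattern (the polling loop is an assumption standing in for the autotest helper):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # Block until the target responds on its default RPC socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done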
00:19:39.846 [2024-12-06 20:46:56.767257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75945 ] 00:19:39.846 [2024-12-06 20:46:56.937932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.107 [2024-12-06 20:46:57.059839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.679 20:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.679 20:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:40.679 20:46:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:40.679 20:46:57 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:40.679 20:46:57 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:40.679 20:46:57 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:40.679 20:46:57 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:40.679 20:46:57 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:40.940 20:46:57 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:40.940 20:46:57 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:40.940 20:46:57 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:40.940 20:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:40.940 20:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:40.940 20:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:40.940 20:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:40.940 20:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:41.200 20:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:41.200 { 00:19:41.200 "name": "nvme0n1", 00:19:41.200 "aliases": [ 00:19:41.200 "083f2128-6aac-4c2b-9e67-a65ff481009f" 00:19:41.200 ], 00:19:41.200 "product_name": "NVMe disk", 00:19:41.200 "block_size": 4096, 00:19:41.200 "num_blocks": 1310720, 00:19:41.200 "uuid": "083f2128-6aac-4c2b-9e67-a65ff481009f", 00:19:41.200 "numa_id": -1, 00:19:41.200 "assigned_rate_limits": { 00:19:41.200 "rw_ios_per_sec": 0, 00:19:41.201 "rw_mbytes_per_sec": 0, 00:19:41.201 "r_mbytes_per_sec": 0, 00:19:41.201 "w_mbytes_per_sec": 0 00:19:41.201 }, 00:19:41.201 "claimed": true, 00:19:41.201 "claim_type": "read_many_write_one", 00:19:41.201 "zoned": false, 00:19:41.201 "supported_io_types": { 00:19:41.201 "read": true, 00:19:41.201 "write": true, 00:19:41.201 "unmap": true, 00:19:41.201 "flush": true, 00:19:41.201 "reset": true, 00:19:41.201 "nvme_admin": true, 00:19:41.201 "nvme_io": true, 00:19:41.201 "nvme_io_md": false, 00:19:41.201 "write_zeroes": true, 00:19:41.201 "zcopy": false, 00:19:41.201 "get_zone_info": false, 00:19:41.201 "zone_management": false, 00:19:41.201 "zone_append": false, 00:19:41.201 "compare": true, 00:19:41.201 "compare_and_write": false, 00:19:41.201 "abort": true, 00:19:41.201 "seek_hole": false, 00:19:41.201 "seek_data": false, 00:19:41.201 "copy": true, 00:19:41.201 "nvme_iov_md": false 00:19:41.201 }, 00:19:41.201 "driver_specific": { 00:19:41.201 
"nvme": [ 00:19:41.201 { 00:19:41.201 "pci_address": "0000:00:11.0", 00:19:41.201 "trid": { 00:19:41.201 "trtype": "PCIe", 00:19:41.201 "traddr": "0000:00:11.0" 00:19:41.201 }, 00:19:41.201 "ctrlr_data": { 00:19:41.201 "cntlid": 0, 00:19:41.201 "vendor_id": "0x1b36", 00:19:41.201 "model_number": "QEMU NVMe Ctrl", 00:19:41.201 "serial_number": "12341", 00:19:41.201 "firmware_revision": "8.0.0", 00:19:41.201 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:41.201 "oacs": { 00:19:41.201 "security": 0, 00:19:41.201 "format": 1, 00:19:41.201 "firmware": 0, 00:19:41.201 "ns_manage": 1 00:19:41.201 }, 00:19:41.201 "multi_ctrlr": false, 00:19:41.201 "ana_reporting": false 00:19:41.201 }, 00:19:41.201 "vs": { 00:19:41.201 "nvme_version": "1.4" 00:19:41.201 }, 00:19:41.201 "ns_data": { 00:19:41.201 "id": 1, 00:19:41.201 "can_share": false 00:19:41.201 } 00:19:41.201 } 00:19:41.201 ], 00:19:41.201 "mp_policy": "active_passive" 00:19:41.201 } 00:19:41.201 } 00:19:41.201 ]' 00:19:41.201 20:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:41.201 20:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:41.201 20:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:41.201 20:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:41.201 20:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:41.201 20:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:41.201 20:46:58 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:41.201 20:46:58 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:41.201 20:46:58 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:41.201 20:46:58 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:41.201 20:46:58 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:41.463 20:46:58 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=c5609759-bbd3-4f9d-b220-5bb79a1fd3fa 00:19:41.463 20:46:58 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:41.463 20:46:58 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c5609759-bbd3-4f9d-b220-5bb79a1fd3fa 00:19:41.724 20:46:58 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:41.983 20:46:58 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=6b71aa1c-bc00-41d5-a4e5-3a14309ca848 00:19:41.983 20:46:58 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6b71aa1c-bc00-41d5-a4e5-3a14309ca848 00:19:41.983 20:46:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=dc32eb23-0768-4ef6-a400-d69e1fa94121 00:19:41.983 20:46:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dc32eb23-0768-4ef6-a400-d69e1fa94121 00:19:41.983 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:41.983 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:41.983 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=dc32eb23-0768-4ef6-a400-d69e1fa94121 00:19:41.983 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:41.983 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size dc32eb23-0768-4ef6-a400-d69e1fa94121 00:19:41.983 20:46:59 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=dc32eb23-0768-4ef6-a400-d69e1fa94121 00:19:41.983 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:41.983 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:41.983 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:41.983 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc32eb23-0768-4ef6-a400-d69e1fa94121 00:19:42.242 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:42.242 { 00:19:42.242 "name": "dc32eb23-0768-4ef6-a400-d69e1fa94121", 00:19:42.242 "aliases": [ 00:19:42.242 "lvs/nvme0n1p0" 00:19:42.242 ], 00:19:42.242 "product_name": "Logical Volume", 00:19:42.242 "block_size": 4096, 00:19:42.242 "num_blocks": 26476544, 00:19:42.242 "uuid": "dc32eb23-0768-4ef6-a400-d69e1fa94121", 00:19:42.242 "assigned_rate_limits": { 00:19:42.242 "rw_ios_per_sec": 0, 00:19:42.242 "rw_mbytes_per_sec": 0, 00:19:42.242 "r_mbytes_per_sec": 0, 00:19:42.242 "w_mbytes_per_sec": 0 00:19:42.242 }, 00:19:42.242 "claimed": false, 00:19:42.242 "zoned": false, 00:19:42.242 "supported_io_types": { 00:19:42.242 "read": true, 00:19:42.242 "write": true, 00:19:42.242 "unmap": true, 00:19:42.242 "flush": false, 00:19:42.242 "reset": true, 00:19:42.242 "nvme_admin": false, 00:19:42.242 "nvme_io": false, 00:19:42.242 "nvme_io_md": false, 00:19:42.242 "write_zeroes": true, 00:19:42.242 "zcopy": false, 00:19:42.242 "get_zone_info": false, 00:19:42.242 "zone_management": false, 00:19:42.242 "zone_append": false, 00:19:42.242 "compare": false, 00:19:42.242 "compare_and_write": false, 00:19:42.242 "abort": false, 00:19:42.242 "seek_hole": true, 00:19:42.242 "seek_data": true, 00:19:42.242 "copy": false, 00:19:42.242 "nvme_iov_md": false 00:19:42.242 }, 00:19:42.242 "driver_specific": { 00:19:42.242 "lvol": { 00:19:42.242 "lvol_store_uuid": "6b71aa1c-bc00-41d5-a4e5-3a14309ca848", 00:19:42.242 "base_bdev": "nvme0n1", 00:19:42.242 "thin_provision": true, 00:19:42.242 "num_allocated_clusters": 0, 00:19:42.242 "snapshot": false, 00:19:42.242 "clone": false, 00:19:42.242 "esnap_clone": false 00:19:42.242 } 00:19:42.242 } 00:19:42.242 } 00:19:42.242 ]' 00:19:42.242 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:42.242 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:42.242 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:42.242 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:42.242 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:42.242 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:42.242 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:42.242 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:42.242 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:42.501 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:42.501 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:42.501 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size dc32eb23-0768-4ef6-a400-d69e1fa94121 00:19:42.501 20:46:59 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=dc32eb23-0768-4ef6-a400-d69e1fa94121 00:19:42.501 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:42.501 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:42.501 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:42.501 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc32eb23-0768-4ef6-a400-d69e1fa94121 00:19:42.760 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:42.760 { 00:19:42.760 "name": "dc32eb23-0768-4ef6-a400-d69e1fa94121", 00:19:42.760 "aliases": [ 00:19:42.760 "lvs/nvme0n1p0" 00:19:42.760 ], 00:19:42.760 "product_name": "Logical Volume", 00:19:42.760 "block_size": 4096, 00:19:42.760 "num_blocks": 26476544, 00:19:42.760 "uuid": "dc32eb23-0768-4ef6-a400-d69e1fa94121", 00:19:42.760 "assigned_rate_limits": { 00:19:42.760 "rw_ios_per_sec": 0, 00:19:42.760 "rw_mbytes_per_sec": 0, 00:19:42.760 "r_mbytes_per_sec": 0, 00:19:42.760 "w_mbytes_per_sec": 0 00:19:42.760 }, 00:19:42.760 "claimed": false, 00:19:42.760 "zoned": false, 00:19:42.760 "supported_io_types": { 00:19:42.760 "read": true, 00:19:42.760 "write": true, 00:19:42.760 "unmap": true, 00:19:42.760 "flush": false, 00:19:42.760 "reset": true, 00:19:42.760 "nvme_admin": false, 00:19:42.760 "nvme_io": false, 00:19:42.760 "nvme_io_md": false, 00:19:42.760 "write_zeroes": true, 00:19:42.760 "zcopy": false, 00:19:42.760 "get_zone_info": false, 00:19:42.760 "zone_management": false, 00:19:42.760 "zone_append": false, 00:19:42.760 "compare": false, 00:19:42.760 "compare_and_write": false, 00:19:42.760 "abort": false, 00:19:42.760 "seek_hole": true, 00:19:42.760 "seek_data": true, 00:19:42.760 "copy": false, 00:19:42.760 "nvme_iov_md": false 00:19:42.760 }, 00:19:42.760 "driver_specific": { 00:19:42.760 "lvol": { 00:19:42.760 "lvol_store_uuid": "6b71aa1c-bc00-41d5-a4e5-3a14309ca848", 00:19:42.760 "base_bdev": "nvme0n1", 00:19:42.760 "thin_provision": true, 00:19:42.760 "num_allocated_clusters": 0, 00:19:42.760 "snapshot": false, 00:19:42.760 "clone": false, 00:19:42.760 "esnap_clone": false 00:19:42.760 } 00:19:42.760 } 00:19:42.760 } 00:19:42.760 ]' 00:19:42.760 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:42.760 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:42.760 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:42.760 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:42.760 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:42.760 20:46:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:42.760 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:42.760 20:46:59 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:43.018 20:47:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:43.018 20:47:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size dc32eb23-0768-4ef6-a400-d69e1fa94121 00:19:43.018 20:47:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=dc32eb23-0768-4ef6-a400-d69e1fa94121 00:19:43.018 20:47:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:43.018 20:47:00 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:43.018 20:47:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:43.018 20:47:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc32eb23-0768-4ef6-a400-d69e1fa94121 00:19:43.279 20:47:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:43.279 { 00:19:43.279 "name": "dc32eb23-0768-4ef6-a400-d69e1fa94121", 00:19:43.279 "aliases": [ 00:19:43.279 "lvs/nvme0n1p0" 00:19:43.279 ], 00:19:43.279 "product_name": "Logical Volume", 00:19:43.279 "block_size": 4096, 00:19:43.279 "num_blocks": 26476544, 00:19:43.279 "uuid": "dc32eb23-0768-4ef6-a400-d69e1fa94121", 00:19:43.279 "assigned_rate_limits": { 00:19:43.279 "rw_ios_per_sec": 0, 00:19:43.279 "rw_mbytes_per_sec": 0, 00:19:43.279 "r_mbytes_per_sec": 0, 00:19:43.279 "w_mbytes_per_sec": 0 00:19:43.279 }, 00:19:43.279 "claimed": false, 00:19:43.279 "zoned": false, 00:19:43.279 "supported_io_types": { 00:19:43.279 "read": true, 00:19:43.279 "write": true, 00:19:43.279 "unmap": true, 00:19:43.279 "flush": false, 00:19:43.279 "reset": true, 00:19:43.279 "nvme_admin": false, 00:19:43.279 "nvme_io": false, 00:19:43.279 "nvme_io_md": false, 00:19:43.279 "write_zeroes": true, 00:19:43.279 "zcopy": false, 00:19:43.279 "get_zone_info": false, 00:19:43.279 "zone_management": false, 00:19:43.279 "zone_append": false, 00:19:43.279 "compare": false, 00:19:43.279 "compare_and_write": false, 00:19:43.279 "abort": false, 00:19:43.279 "seek_hole": true, 00:19:43.279 "seek_data": true, 00:19:43.279 "copy": false, 00:19:43.279 "nvme_iov_md": false 00:19:43.279 }, 00:19:43.279 "driver_specific": { 00:19:43.279 "lvol": { 00:19:43.279 "lvol_store_uuid": "6b71aa1c-bc00-41d5-a4e5-3a14309ca848", 00:19:43.279 "base_bdev": "nvme0n1", 00:19:43.279 "thin_provision": true, 00:19:43.279 "num_allocated_clusters": 0, 00:19:43.279 "snapshot": false, 00:19:43.279 "clone": false, 00:19:43.279 "esnap_clone": false 00:19:43.279 } 00:19:43.279 } 00:19:43.279 } 00:19:43.279 ]' 00:19:43.279 20:47:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:43.279 20:47:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:43.279 20:47:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:43.279 20:47:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:43.279 20:47:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:43.279 20:47:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:43.279 20:47:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:43.279 20:47:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dc32eb23-0768-4ef6-a400-d69e1fa94121 -c nvc0n1p0 --l2p_dram_limit 20 00:19:43.541 [2024-12-06 20:47:00.541964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.541 [2024-12-06 20:47:00.542094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:43.541 [2024-12-06 20:47:00.542145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:43.541 [2024-12-06 20:47:00.542166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.541 [2024-12-06 20:47:00.542235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.541 [2024-12-06 20:47:00.542257] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.541 [2024-12-06 20:47:00.542273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:43.541 [2024-12-06 20:47:00.542289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.541 [2024-12-06 20:47:00.542313] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:43.541 [2024-12-06 20:47:00.542936] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:43.541 [2024-12-06 20:47:00.543015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.541 [2024-12-06 20:47:00.543055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.541 [2024-12-06 20:47:00.543115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:19:43.541 [2024-12-06 20:47:00.543134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.541 [2024-12-06 20:47:00.543256] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7c378c29-ca87-4a47-a991-902d0863b67a 00:19:43.541 [2024-12-06 20:47:00.544269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.541 [2024-12-06 20:47:00.544300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:43.541 [2024-12-06 20:47:00.544312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:43.541 [2024-12-06 20:47:00.544318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.541 [2024-12-06 20:47:00.549085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.541 [2024-12-06 20:47:00.549177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.541 [2024-12-06 20:47:00.549190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.736 ms 00:19:43.541 [2024-12-06 20:47:00.549199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.541 [2024-12-06 20:47:00.549270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.541 [2024-12-06 20:47:00.549277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.541 [2024-12-06 20:47:00.549288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:43.541 [2024-12-06 20:47:00.549293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.541 [2024-12-06 20:47:00.549333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.541 [2024-12-06 20:47:00.549340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:43.541 [2024-12-06 20:47:00.549348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:43.541 [2024-12-06 20:47:00.549354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.541 [2024-12-06 20:47:00.549372] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:43.542 [2024-12-06 20:47:00.552242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.542 [2024-12-06 20:47:00.552345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.542 [2024-12-06 20:47:00.552357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:19:43.542 [2024-12-06 20:47:00.552367] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.542 [2024-12-06 20:47:00.552394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.542 [2024-12-06 20:47:00.552402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:43.542 [2024-12-06 20:47:00.552408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:43.542 [2024-12-06 20:47:00.552416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.542 [2024-12-06 20:47:00.552427] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:43.542 [2024-12-06 20:47:00.552539] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:43.542 [2024-12-06 20:47:00.552548] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:43.542 [2024-12-06 20:47:00.552557] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:43.542 [2024-12-06 20:47:00.552565] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:43.542 [2024-12-06 20:47:00.552575] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:43.542 [2024-12-06 20:47:00.552581] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:43.542 [2024-12-06 20:47:00.552588] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:43.542 [2024-12-06 20:47:00.552593] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:43.542 [2024-12-06 20:47:00.552600] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:43.542 [2024-12-06 20:47:00.552607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.542 [2024-12-06 20:47:00.552614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:43.542 [2024-12-06 20:47:00.552620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:19:43.542 [2024-12-06 20:47:00.552627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.542 [2024-12-06 20:47:00.552690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.542 [2024-12-06 20:47:00.552698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:43.542 [2024-12-06 20:47:00.552704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:43.542 [2024-12-06 20:47:00.552712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.542 [2024-12-06 20:47:00.552780] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:43.542 [2024-12-06 20:47:00.552790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:43.542 [2024-12-06 20:47:00.552796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.542 [2024-12-06 20:47:00.552803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.542 [2024-12-06 20:47:00.552809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:43.542 [2024-12-06 20:47:00.552816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:43.542 [2024-12-06 20:47:00.552821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:43.542 
[2024-12-06 20:47:00.552828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:43.542 [2024-12-06 20:47:00.552833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:43.542 [2024-12-06 20:47:00.552840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.542 [2024-12-06 20:47:00.552845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:43.542 [2024-12-06 20:47:00.552856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:43.542 [2024-12-06 20:47:00.552862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.542 [2024-12-06 20:47:00.552869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:43.542 [2024-12-06 20:47:00.552875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:43.542 [2024-12-06 20:47:00.552883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.542 [2024-12-06 20:47:00.552897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:43.542 [2024-12-06 20:47:00.552905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:43.542 [2024-12-06 20:47:00.552910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.542 [2024-12-06 20:47:00.552917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:43.542 [2024-12-06 20:47:00.552922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:43.542 [2024-12-06 20:47:00.552928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.542 [2024-12-06 20:47:00.552933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:43.542 [2024-12-06 20:47:00.552940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:43.542 [2024-12-06 20:47:00.552944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.542 [2024-12-06 20:47:00.552951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:43.542 [2024-12-06 20:47:00.552956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:43.542 [2024-12-06 20:47:00.552962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.542 [2024-12-06 20:47:00.552967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:43.542 [2024-12-06 20:47:00.552973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:43.542 [2024-12-06 20:47:00.552978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.542 [2024-12-06 20:47:00.552986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:43.542 [2024-12-06 20:47:00.552991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:43.542 [2024-12-06 20:47:00.552999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.542 [2024-12-06 20:47:00.553004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:43.542 [2024-12-06 20:47:00.553011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:43.542 [2024-12-06 20:47:00.553015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.542 [2024-12-06 20:47:00.553021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:43.542 [2024-12-06 20:47:00.553027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:43.542 [2024-12-06 20:47:00.553033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.542 [2024-12-06 20:47:00.553038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:43.542 [2024-12-06 20:47:00.553044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:43.542 [2024-12-06 20:47:00.553049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.542 [2024-12-06 20:47:00.553056] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:43.542 [2024-12-06 20:47:00.553062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:43.542 [2024-12-06 20:47:00.553070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.542 [2024-12-06 20:47:00.553077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.542 [2024-12-06 20:47:00.553085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:43.542 [2024-12-06 20:47:00.553090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:43.542 [2024-12-06 20:47:00.553096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:43.542 [2024-12-06 20:47:00.553102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:43.542 [2024-12-06 20:47:00.553108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:43.542 [2024-12-06 20:47:00.553113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:43.542 [2024-12-06 20:47:00.553120] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:43.542 [2024-12-06 20:47:00.553127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.542 [2024-12-06 20:47:00.553135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:43.542 [2024-12-06 20:47:00.553140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:43.542 [2024-12-06 20:47:00.553147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:43.542 [2024-12-06 20:47:00.553152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:43.542 [2024-12-06 20:47:00.553159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:43.542 [2024-12-06 20:47:00.553164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:43.542 [2024-12-06 20:47:00.553172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:43.542 [2024-12-06 20:47:00.553177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:43.542 [2024-12-06 20:47:00.553185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:43.542 [2024-12-06 20:47:00.553190] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:43.542 [2024-12-06 20:47:00.553196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:43.542 [2024-12-06 20:47:00.553202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:43.542 [2024-12-06 20:47:00.553209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:43.542 [2024-12-06 20:47:00.553214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:43.542 [2024-12-06 20:47:00.553221] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:43.542 [2024-12-06 20:47:00.553227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.542 [2024-12-06 20:47:00.553235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:43.543 [2024-12-06 20:47:00.553240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:43.543 [2024-12-06 20:47:00.553247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:43.543 [2024-12-06 20:47:00.553252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:43.543 [2024-12-06 20:47:00.553259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.543 [2024-12-06 20:47:00.553265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:43.543 [2024-12-06 20:47:00.553272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:19:43.543 [2024-12-06 20:47:00.553279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.543 [2024-12-06 20:47:00.553317] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:19:43.543 [2024-12-06 20:47:00.553325] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:47.741 [2024-12-06 20:47:04.163817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.741 [2024-12-06 20:47:04.163883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:47.741 [2024-12-06 20:47:04.163911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3610.488 ms 00:19:47.741 [2024-12-06 20:47:04.163921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.741 [2024-12-06 20:47:04.189900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.741 [2024-12-06 20:47:04.189943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.741 [2024-12-06 20:47:04.189957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.761 ms 00:19:47.741 [2024-12-06 20:47:04.189965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.741 [2024-12-06 20:47:04.190086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.741 [2024-12-06 20:47:04.190096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:47.741 [2024-12-06 20:47:04.190109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:47.741 [2024-12-06 20:47:04.190116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.741 [2024-12-06 20:47:04.232971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.741 [2024-12-06 20:47:04.233017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.741 [2024-12-06 20:47:04.233031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.806 ms 00:19:47.741 [2024-12-06 20:47:04.233040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.741 [2024-12-06 20:47:04.233079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.741 [2024-12-06 20:47:04.233088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.741 [2024-12-06 20:47:04.233099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:47.741 [2024-12-06 20:47:04.233108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.741 [2024-12-06 20:47:04.233485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.741 [2024-12-06 20:47:04.233503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.741 [2024-12-06 20:47:04.233514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:19:47.741 [2024-12-06 20:47:04.233521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.741 [2024-12-06 20:47:04.233629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.741 [2024-12-06 20:47:04.233638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.741 [2024-12-06 20:47:04.233649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:19:47.741 [2024-12-06 20:47:04.233657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.741 [2024-12-06 20:47:04.247052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.741 [2024-12-06 20:47:04.247084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.741 [2024-12-06 
20:47:04.247096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.376 ms 00:19:47.741 [2024-12-06 20:47:04.247111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.741 [2024-12-06 20:47:04.258394] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:47.741 [2024-12-06 20:47:04.263461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.742 [2024-12-06 20:47:04.263495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:47.742 [2024-12-06 20:47:04.263505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.284 ms 00:19:47.742 [2024-12-06 20:47:04.263514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.742 [2024-12-06 20:47:04.342723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.742 [2024-12-06 20:47:04.342780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:47.742 [2024-12-06 20:47:04.342792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.187 ms 00:19:47.742 [2024-12-06 20:47:04.342802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.742 [2024-12-06 20:47:04.342991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.742 [2024-12-06 20:47:04.343006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:47.742 [2024-12-06 20:47:04.343015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:19:47.742 [2024-12-06 20:47:04.343027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.742 [2024-12-06 20:47:04.367021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.742 [2024-12-06 20:47:04.367061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:47.742 [2024-12-06 20:47:04.367073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.964 ms 00:19:47.742 [2024-12-06 20:47:04.367083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.742 [2024-12-06 20:47:04.390794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.742 [2024-12-06 20:47:04.390835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:47.742 [2024-12-06 20:47:04.390847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.676 ms 00:19:47.742 [2024-12-06 20:47:04.390856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.742 [2024-12-06 20:47:04.391435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.742 [2024-12-06 20:47:04.391458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:47.742 [2024-12-06 20:47:04.391469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:19:47.742 [2024-12-06 20:47:04.391478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.742 [2024-12-06 20:47:04.467837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.742 [2024-12-06 20:47:04.467878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:47.742 [2024-12-06 20:47:04.467906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.309 ms 00:19:47.742 [2024-12-06 20:47:04.467917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.742 [2024-12-06 
20:47:04.493138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.742 [2024-12-06 20:47:04.493285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:47.742 [2024-12-06 20:47:04.493305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.156 ms 00:19:47.742 [2024-12-06 20:47:04.493315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.742 [2024-12-06 20:47:04.517310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.742 [2024-12-06 20:47:04.517352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:47.742 [2024-12-06 20:47:04.517364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.753 ms 00:19:47.742 [2024-12-06 20:47:04.517374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.742 [2024-12-06 20:47:04.541523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.742 [2024-12-06 20:47:04.541562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:47.742 [2024-12-06 20:47:04.541574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.115 ms 00:19:47.742 [2024-12-06 20:47:04.541583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.742 [2024-12-06 20:47:04.541619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.742 [2024-12-06 20:47:04.541633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:47.742 [2024-12-06 20:47:04.541641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:47.742 [2024-12-06 20:47:04.541651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.742 [2024-12-06 20:47:04.541722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.742 [2024-12-06 20:47:04.541735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:47.742 [2024-12-06 20:47:04.541744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:47.742 [2024-12-06 20:47:04.541752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.742 [2024-12-06 20:47:04.542571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4000.201 ms, result 0 00:19:47.742 { 00:19:47.742 "name": "ftl0", 00:19:47.742 "uuid": "7c378c29-ca87-4a47-a991-902d0863b67a" 00:19:47.742 } 00:19:47.742 20:47:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:47.742 20:47:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:47.742 20:47:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:47.742 20:47:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:47.742 [2024-12-06 20:47:04.850954] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:47.742 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:47.742 Zero copy mechanism will not be used. 00:19:47.742 Running I/O for 4 seconds... 
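This first pass drives single-outstanding random writes for 4 seconds at an I/O size of 69632 bytes = 17 x 4096 = 64 KiB + 4 KiB, which is why the notice above reports it exceeds the 65536-byte zero-copy threshold. The invocation, repeated from the trace:

    # Queue depth 1, random writes, 4 seconds, 69632-byte (68 KiB) I/Os.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632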
00:19:50.077 1386.00 IOPS, 92.04 MiB/s [2024-12-06T20:47:08.144Z] 1264.50 IOPS, 83.97 MiB/s [2024-12-06T20:47:09.079Z] 1302.67 IOPS, 86.51 MiB/s [2024-12-06T20:47:09.079Z] 1350.00 IOPS, 89.65 MiB/s
00:19:51.946 Latency(us)
00:19:51.946 [2024-12-06T20:47:09.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:51.946 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:19:51.946 ftl0 : 4.00 1349.15 89.59 0.00 0.00 780.09 191.41 3226.39
00:19:51.946 [2024-12-06T20:47:09.079Z] ===================================================================================================================
00:19:51.947 [2024-12-06T20:47:09.080Z] Total : 1349.15 89.59 0.00 0.00 780.09 191.41 3226.39
00:19:51.947 {
00:19:51.947 "results": [
00:19:51.947 {
00:19:51.947 "job": "ftl0",
00:19:51.947 "core_mask": "0x1",
00:19:51.947 "workload": "randwrite",
00:19:51.947 "status": "finished",
00:19:51.947 "queue_depth": 1,
00:19:51.947 "io_size": 69632,
00:19:51.947 "runtime": 4.003266,
00:19:51.947 "iops": 1349.148420314813,
00:19:51.947 "mibps": 89.59188728653055,
00:19:51.947 "io_failed": 0,
00:19:51.947 "io_timeout": 0,
00:19:51.947 "avg_latency_us": 780.0921196929344,
00:19:51.947 "min_latency_us": 191.40923076923076,
00:19:51.947 "max_latency_us": 3226.3876923076923
00:19:51.947 }
00:19:51.947 ],
00:19:51.947 "core_count": 1
00:19:51.947 }
00:19:51.947 [2024-12-06 20:47:08.862631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:51.947 20:47:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-12-06 20:47:08.961572] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
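The figures in the results JSON above are internally consistent: "mibps" is just "iops" scaled by the IO size, MiB/s = IOPS * io_size / 2^20. A quick illustrative cross-check of the q=1 run (not part of the test itself):

# MiB/s = IOPS * io_size / 2^20, with iops and io_size taken from the JSON above
awk 'BEGIN { printf "%.5f\n", 1349.148420314813 * 69632 / 1048576 }'
# prints 89.59189, matching "mibps": 89.59188728653055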
00:19:54.267 6549.00 IOPS, 25.58 MiB/s [2024-12-06T20:47:12.334Z] 5915.50 IOPS, 23.11 MiB/s [2024-12-06T20:47:13.269Z] 5745.00 IOPS, 22.44 MiB/s [2024-12-06T20:47:13.269Z] 5617.75 IOPS, 21.94 MiB/s
00:19:56.136 Latency(us)
00:19:56.136 [2024-12-06T20:47:13.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:56.136 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:19:56.136 ftl0 : 4.04 5600.01 21.88 0.00 0.00 22765.77 283.57 43959.53
00:19:56.136 [2024-12-06T20:47:13.269Z] ===================================================================================================================
00:19:56.136 [2024-12-06T20:47:13.269Z] Total : 5600.01 21.88 0.00 0.00 22765.77 0.00 43959.53
00:19:56.136 {
00:19:56.136 "results": [
00:19:56.136 {
00:19:56.136 "job": "ftl0",
00:19:56.136 "core_mask": "0x1",
00:19:56.136 "workload": "randwrite",
00:19:56.136 "status": "finished",
00:19:56.136 "queue_depth": 128,
00:19:56.136 "io_size": 4096,
00:19:56.136 "runtime": 4.03553,
00:19:56.136 "iops": 5600.007929565633,
00:19:56.136 "mibps": 21.875030974865755,
00:19:56.136 "io_failed": 0,
00:19:56.136 "io_timeout": 0,
00:19:56.136 "avg_latency_us": 22765.774921286513,
00:19:56.136 "min_latency_us": 283.5692307692308,
00:19:56.136 "max_latency_us": 43959.53230769231
00:19:56.136 }
00:19:56.136 ],
00:19:56.136 "core_count": 1
00:19:56.136 }
00:19:56.136 [2024-12-06 20:47:13.006455] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:56.136 20:47:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-12-06 20:47:13.110947] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
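The average latency reported for the deep-queue run also squares with Little's law: in-flight IOs ~= IOPS * mean latency, which should land near the configured queue depth. An illustrative check against the q=128 randwrite numbers above:

# in-flight IOs ~= iops * avg_latency_us / 1e6, values from the JSON above
awk 'BEGIN { printf "%.1f\n", 5600.007929565633 * 22765.774921286513 / 1e6 }'
# prints 127.5, close to the requested -q 128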
00:19:57.997 4860.00 IOPS, 18.98 MiB/s [2024-12-06T20:47:16.503Z] 4936.00 IOPS, 19.28 MiB/s [2024-12-06T20:47:17.438Z] 4884.33 IOPS, 19.08 MiB/s [2024-12-06T20:47:17.438Z] 4849.75 IOPS, 18.94 MiB/s
00:20:00.305 Latency(us)
00:20:00.305 [2024-12-06T20:47:17.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:00.305 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:00.305 Verification LBA range: start 0x0 length 0x1400000
00:20:00.305 ftl0 : 4.01 4866.01 19.01 0.00 0.00 26235.44 415.90 73803.62
00:20:00.305 [2024-12-06T20:47:17.438Z] ===================================================================================================================
00:20:00.305 [2024-12-06T20:47:17.438Z] Total : 4866.01 19.01 0.00 0.00 26235.44 0.00 73803.62
00:20:00.305 {
00:20:00.305 "results": [
00:20:00.305 {
00:20:00.305 "job": "ftl0",
00:20:00.305 "core_mask": "0x1",
00:20:00.305 "workload": "verify",
00:20:00.305 "status": "finished",
00:20:00.305 "verify_range": {
00:20:00.305 "start": 0,
00:20:00.305 "length": 20971520
00:20:00.305 },
00:20:00.305 "queue_depth": 128,
00:20:00.305 "io_size": 4096,
00:20:00.305 "runtime": 4.01294,
00:20:00.305 "iops": 4866.008462623413,
00:20:00.305 "mibps": 19.007845557122707,
00:20:00.305 "io_failed": 0,
00:20:00.305 "io_timeout": 0,
00:20:00.305 "avg_latency_us": 26235.4390933264,
00:20:00.305 "min_latency_us": 415.90153846153845,
00:20:00.305 "max_latency_us": 73803.61846153846
00:20:00.305 }
00:20:00.305 ],
00:20:00.305 "core_count": 1
00:20:00.305 }
00:20:00.305 [2024-12-06 20:47:17.138480] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:20:00.305 20:47:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-12-06 20:47:17.392422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:00.305 [2024-12-06 20:47:17.392468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:00.305 [2024-12-06 20:47:17.392482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:00.305 [2024-12-06 20:47:17.392491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:00.305 [2024-12-06 20:47:17.392512] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:00.305 [2024-12-06 20:47:17.395114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:00.305 [2024-12-06 20:47:17.395141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:00.305 [2024-12-06 20:47:17.395153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms
00:20:00.305 [2024-12-06 20:47:17.395161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:00.305 [2024-12-06 20:47:17.397652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:00.305 [2024-12-06 20:47:17.397682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:00.305 [2024-12-06 20:47:17.397699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.468 ms
00:20:00.305 [2024-12-06 20:47:17.397708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:00.564 [2024-12-06 20:47:17.590150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:00.564 [2024-12-06 20:47:17.590193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:20:00.564 [2024-12-06 20:47:17.590211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 192.420 ms 00:20:00.564 [2024-12-06 20:47:17.590219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.564 [2024-12-06 20:47:17.596384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.564 [2024-12-06 20:47:17.596517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:00.564 [2024-12-06 20:47:17.596538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms 00:20:00.564 [2024-12-06 20:47:17.596549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.564 [2024-12-06 20:47:17.620919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.564 [2024-12-06 20:47:17.620953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:00.564 [2024-12-06 20:47:17.620966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.312 ms 00:20:00.564 [2024-12-06 20:47:17.620973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.564 [2024-12-06 20:47:17.636066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.564 [2024-12-06 20:47:17.636101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:00.564 [2024-12-06 20:47:17.636114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.057 ms 00:20:00.564 [2024-12-06 20:47:17.636122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.564 [2024-12-06 20:47:17.636286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.564 [2024-12-06 20:47:17.636298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:00.564 [2024-12-06 20:47:17.636310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:20:00.564 [2024-12-06 20:47:17.636317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.564 [2024-12-06 20:47:17.659900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.564 [2024-12-06 20:47:17.659929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:00.564 [2024-12-06 20:47:17.659941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.565 ms 00:20:00.564 [2024-12-06 20:47:17.659948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.564 [2024-12-06 20:47:17.682903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.564 [2024-12-06 20:47:17.683034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:00.564 [2024-12-06 20:47:17.683054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.920 ms 00:20:00.564 [2024-12-06 20:47:17.683061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.824 [2024-12-06 20:47:17.705542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.824 [2024-12-06 20:47:17.705571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:00.824 [2024-12-06 20:47:17.705583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.449 ms 00:20:00.824 [2024-12-06 20:47:17.705590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.824 [2024-12-06 20:47:17.728267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.824 [2024-12-06 20:47:17.728297] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:00.824 [2024-12-06 20:47:17.728310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.612 ms 00:20:00.824 [2024-12-06 20:47:17.728317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.824 [2024-12-06 20:47:17.728350] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:00.824 [2024-12-06 20:47:17.728364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:00.824 [2024-12-06 20:47:17.728550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:00.824 [2024-12-06 20:47:17.728717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.728995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729220] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:00.825 [2024-12-06 20:47:17.729263] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:00.825 [2024-12-06 20:47:17.729272] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c378c29-ca87-4a47-a991-902d0863b67a 00:20:00.825 [2024-12-06 20:47:17.729281] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:00.825 [2024-12-06 20:47:17.729290] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:00.825 [2024-12-06 20:47:17.729297] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:00.825 [2024-12-06 20:47:17.729306] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:00.825 [2024-12-06 20:47:17.729313] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:00.825 [2024-12-06 20:47:17.729322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:00.825 [2024-12-06 20:47:17.729329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:00.825 [2024-12-06 20:47:17.729338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:00.825 [2024-12-06 20:47:17.729344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:00.825 [2024-12-06 20:47:17.729353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.825 [2024-12-06 20:47:17.729361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:00.825 [2024-12-06 20:47:17.729371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:20:00.825 [2024-12-06 20:47:17.729378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.825 [2024-12-06 20:47:17.741756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.825 [2024-12-06 20:47:17.741786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:00.825 [2024-12-06 20:47:17.741798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.347 ms 00:20:00.825 [2024-12-06 20:47:17.741805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.825 [2024-12-06 20:47:17.742175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.825 [2024-12-06 20:47:17.742185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:00.825 [2024-12-06 20:47:17.742194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:20:00.825 [2024-12-06 20:47:17.742202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.825 [2024-12-06 20:47:17.777040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.825 [2024-12-06 20:47:17.777072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.825 [2024-12-06 20:47:17.777086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.825 [2024-12-06 20:47:17.777093] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:00.825 [2024-12-06 20:47:17.777146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.825 [2024-12-06 20:47:17.777154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.825 [2024-12-06 20:47:17.777163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.825 [2024-12-06 20:47:17.777170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.825 [2024-12-06 20:47:17.777236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.825 [2024-12-06 20:47:17.777246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.825 [2024-12-06 20:47:17.777256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.825 [2024-12-06 20:47:17.777263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.825 [2024-12-06 20:47:17.777279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.825 [2024-12-06 20:47:17.777286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.825 [2024-12-06 20:47:17.777295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.825 [2024-12-06 20:47:17.777302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.825 [2024-12-06 20:47:17.852799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.825 [2024-12-06 20:47:17.852990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.826 [2024-12-06 20:47:17.853011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.826 [2024-12-06 20:47:17.853019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.826 [2024-12-06 20:47:17.914954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.826 [2024-12-06 20:47:17.914996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.826 [2024-12-06 20:47:17.915009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.826 [2024-12-06 20:47:17.915017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.826 [2024-12-06 20:47:17.915100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.826 [2024-12-06 20:47:17.915110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.826 [2024-12-06 20:47:17.915120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.826 [2024-12-06 20:47:17.915127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.826 [2024-12-06 20:47:17.915169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.826 [2024-12-06 20:47:17.915178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.826 [2024-12-06 20:47:17.915188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.826 [2024-12-06 20:47:17.915195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.826 [2024-12-06 20:47:17.915279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.826 [2024-12-06 20:47:17.915291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.826 [2024-12-06 20:47:17.915302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:00.826 [2024-12-06 20:47:17.915310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.826 [2024-12-06 20:47:17.915339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.826 [2024-12-06 20:47:17.915347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:00.826 [2024-12-06 20:47:17.915357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.826 [2024-12-06 20:47:17.915364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.826 [2024-12-06 20:47:17.915398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.826 [2024-12-06 20:47:17.915407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.826 [2024-12-06 20:47:17.915417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.826 [2024-12-06 20:47:17.915430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.826 [2024-12-06 20:47:17.915470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.826 [2024-12-06 20:47:17.915479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.826 [2024-12-06 20:47:17.915489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.826 [2024-12-06 20:47:17.915496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.826 [2024-12-06 20:47:17.915608] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 523.154 ms, result 0 00:20:00.826 true 00:20:00.826 20:47:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75945 00:20:00.826 20:47:17 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75945 ']' 00:20:00.826 20:47:17 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75945 00:20:00.826 20:47:17 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:20:00.826 20:47:17 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.826 20:47:17 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75945 00:20:01.084 killing process with pid 75945 00:20:01.084 Received shutdown signal, test time was about 4.000000 seconds 00:20:01.084 00:20:01.084 Latency(us) 00:20:01.084 [2024-12-06T20:47:18.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.084 [2024-12-06T20:47:18.217Z] =================================================================================================================== 00:20:01.084 [2024-12-06T20:47:18.217Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.084 20:47:17 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.084 20:47:17 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.084 20:47:17 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75945' 00:20:01.084 20:47:17 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75945 00:20:01.084 20:47:17 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75945 00:20:01.651 Remove shared memory files 00:20:01.651 20:47:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:01.651 20:47:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:20:01.651 20:47:18 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:01.651 20:47:18 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:20:01.651 20:47:18 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:20:01.651 20:47:18 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:20:01.651 20:47:18 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:01.651 20:47:18 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:20:01.651 ************************************ 00:20:01.651 END TEST ftl_bdevperf 00:20:01.651 ************************************ 00:20:01.651 00:20:01.651 real 0m22.187s 00:20:01.651 user 0m24.855s 00:20:01.651 sys 0m0.933s 00:20:01.651 20:47:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.651 20:47:18 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:01.651 20:47:18 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:01.651 20:47:18 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:01.651 20:47:18 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.651 20:47:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:01.651 ************************************ 00:20:01.651 START TEST ftl_trim 00:20:01.651 ************************************ 00:20:01.651 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:01.910 * Looking for test storage... 00:20:01.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:01.910 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:01.910 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:20:01.910 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:01.910 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.910 20:47:18 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:20:01.910 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.910 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:01.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.910 --rc genhtml_branch_coverage=1 00:20:01.910 --rc genhtml_function_coverage=1 00:20:01.910 --rc genhtml_legend=1 00:20:01.910 --rc geninfo_all_blocks=1 00:20:01.910 --rc geninfo_unexecuted_blocks=1 00:20:01.910 00:20:01.910 ' 00:20:01.910 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:01.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.910 --rc genhtml_branch_coverage=1 00:20:01.910 --rc genhtml_function_coverage=1 00:20:01.910 --rc genhtml_legend=1 00:20:01.910 --rc geninfo_all_blocks=1 00:20:01.910 --rc geninfo_unexecuted_blocks=1 00:20:01.910 00:20:01.910 ' 00:20:01.910 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:01.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.910 --rc genhtml_branch_coverage=1 00:20:01.910 --rc genhtml_function_coverage=1 00:20:01.910 --rc genhtml_legend=1 00:20:01.910 --rc geninfo_all_blocks=1 00:20:01.910 --rc geninfo_unexecuted_blocks=1 00:20:01.910 00:20:01.910 ' 00:20:01.910 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:01.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.910 --rc genhtml_branch_coverage=1 00:20:01.910 --rc genhtml_function_coverage=1 00:20:01.910 --rc genhtml_legend=1 00:20:01.910 --rc geninfo_all_blocks=1 00:20:01.911 --rc geninfo_unexecuted_blocks=1 00:20:01.911 00:20:01.911 ' 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:01.911 20:47:18 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76289 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:01.911 20:47:18 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76289 00:20:01.911 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76289 ']' 00:20:01.911 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.911 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.911 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.911 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.911 20:47:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:01.911 [2024-12-06 20:47:19.011557] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:20:01.911 [2024-12-06 20:47:19.011785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76289 ] 00:20:02.169 [2024-12-06 20:47:19.174683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:02.169 [2024-12-06 20:47:19.278641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.169 [2024-12-06 20:47:19.278853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.169 [2024-12-06 20:47:19.278855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.737 20:47:19 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.737 20:47:19 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:02.737 20:47:19 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:02.737 20:47:19 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:20:02.737 20:47:19 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:02.737 20:47:19 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:20:02.737 20:47:19 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:20:02.996 20:47:19 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:03.255 20:47:20 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:03.255 20:47:20 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:20:03.255 20:47:20 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:03.255 20:47:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:03.255 20:47:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:03.255 20:47:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:03.255 20:47:20 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:03.255 20:47:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:03.255 20:47:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:03.255 { 00:20:03.255 "name": "nvme0n1", 00:20:03.255 "aliases": [ 
00:20:03.255 "dac41ad7-69b6-41c9-87a9-be3bfa9f8a08" 00:20:03.255 ], 00:20:03.255 "product_name": "NVMe disk", 00:20:03.255 "block_size": 4096, 00:20:03.255 "num_blocks": 1310720, 00:20:03.255 "uuid": "dac41ad7-69b6-41c9-87a9-be3bfa9f8a08", 00:20:03.255 "numa_id": -1, 00:20:03.255 "assigned_rate_limits": { 00:20:03.255 "rw_ios_per_sec": 0, 00:20:03.255 "rw_mbytes_per_sec": 0, 00:20:03.255 "r_mbytes_per_sec": 0, 00:20:03.255 "w_mbytes_per_sec": 0 00:20:03.255 }, 00:20:03.255 "claimed": true, 00:20:03.255 "claim_type": "read_many_write_one", 00:20:03.255 "zoned": false, 00:20:03.255 "supported_io_types": { 00:20:03.255 "read": true, 00:20:03.255 "write": true, 00:20:03.255 "unmap": true, 00:20:03.255 "flush": true, 00:20:03.255 "reset": true, 00:20:03.255 "nvme_admin": true, 00:20:03.255 "nvme_io": true, 00:20:03.255 "nvme_io_md": false, 00:20:03.255 "write_zeroes": true, 00:20:03.255 "zcopy": false, 00:20:03.255 "get_zone_info": false, 00:20:03.255 "zone_management": false, 00:20:03.255 "zone_append": false, 00:20:03.255 "compare": true, 00:20:03.255 "compare_and_write": false, 00:20:03.255 "abort": true, 00:20:03.255 "seek_hole": false, 00:20:03.255 "seek_data": false, 00:20:03.255 "copy": true, 00:20:03.255 "nvme_iov_md": false 00:20:03.255 }, 00:20:03.255 "driver_specific": { 00:20:03.255 "nvme": [ 00:20:03.255 { 00:20:03.255 "pci_address": "0000:00:11.0", 00:20:03.255 "trid": { 00:20:03.255 "trtype": "PCIe", 00:20:03.255 "traddr": "0000:00:11.0" 00:20:03.255 }, 00:20:03.255 "ctrlr_data": { 00:20:03.255 "cntlid": 0, 00:20:03.255 "vendor_id": "0x1b36", 00:20:03.255 "model_number": "QEMU NVMe Ctrl", 00:20:03.255 "serial_number": "12341", 00:20:03.255 "firmware_revision": "8.0.0", 00:20:03.255 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:03.255 "oacs": { 00:20:03.255 "security": 0, 00:20:03.255 "format": 1, 00:20:03.255 "firmware": 0, 00:20:03.255 "ns_manage": 1 00:20:03.255 }, 00:20:03.255 "multi_ctrlr": false, 00:20:03.255 "ana_reporting": false 00:20:03.255 }, 00:20:03.255 "vs": { 00:20:03.255 "nvme_version": "1.4" 00:20:03.255 }, 00:20:03.255 "ns_data": { 00:20:03.255 "id": 1, 00:20:03.255 "can_share": false 00:20:03.255 } 00:20:03.255 } 00:20:03.255 ], 00:20:03.255 "mp_policy": "active_passive" 00:20:03.255 } 00:20:03.255 } 00:20:03.255 ]' 00:20:03.255 20:47:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:03.514 20:47:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:03.514 20:47:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:03.514 20:47:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:03.514 20:47:20 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:03.514 20:47:20 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:20:03.514 20:47:20 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:03.514 20:47:20 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:03.514 20:47:20 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:03.514 20:47:20 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:03.514 20:47:20 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:03.514 20:47:20 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=6b71aa1c-bc00-41d5-a4e5-3a14309ca848 00:20:03.514 20:47:20 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:03.514 20:47:20 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 6b71aa1c-bc00-41d5-a4e5-3a14309ca848 00:20:03.774 20:47:20 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:04.034 20:47:21 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=a27919f1-ccb0-4459-bc69-8cd3d7e88d38 00:20:04.034 20:47:21 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a27919f1-ccb0-4459-bc69-8cd3d7e88d38 00:20:04.296 20:47:21 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=a168c293-bbf0-42a6-9e11-154612fafb02 00:20:04.296 20:47:21 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a168c293-bbf0-42a6-9e11-154612fafb02 00:20:04.296 20:47:21 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:04.296 20:47:21 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:04.296 20:47:21 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=a168c293-bbf0-42a6-9e11-154612fafb02 00:20:04.296 20:47:21 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:04.296 20:47:21 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size a168c293-bbf0-42a6-9e11-154612fafb02 00:20:04.296 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=a168c293-bbf0-42a6-9e11-154612fafb02 00:20:04.296 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:04.296 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:04.296 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:04.296 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a168c293-bbf0-42a6-9e11-154612fafb02 00:20:04.558 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:04.558 { 00:20:04.558 "name": "a168c293-bbf0-42a6-9e11-154612fafb02", 00:20:04.558 "aliases": [ 00:20:04.558 "lvs/nvme0n1p0" 00:20:04.558 ], 00:20:04.558 "product_name": "Logical Volume", 00:20:04.558 "block_size": 4096, 00:20:04.558 "num_blocks": 26476544, 00:20:04.558 "uuid": "a168c293-bbf0-42a6-9e11-154612fafb02", 00:20:04.558 "assigned_rate_limits": { 00:20:04.558 "rw_ios_per_sec": 0, 00:20:04.558 "rw_mbytes_per_sec": 0, 00:20:04.558 "r_mbytes_per_sec": 0, 00:20:04.558 "w_mbytes_per_sec": 0 00:20:04.558 }, 00:20:04.558 "claimed": false, 00:20:04.558 "zoned": false, 00:20:04.558 "supported_io_types": { 00:20:04.558 "read": true, 00:20:04.558 "write": true, 00:20:04.558 "unmap": true, 00:20:04.558 "flush": false, 00:20:04.558 "reset": true, 00:20:04.558 "nvme_admin": false, 00:20:04.558 "nvme_io": false, 00:20:04.558 "nvme_io_md": false, 00:20:04.558 "write_zeroes": true, 00:20:04.558 "zcopy": false, 00:20:04.558 "get_zone_info": false, 00:20:04.558 "zone_management": false, 00:20:04.558 "zone_append": false, 00:20:04.558 "compare": false, 00:20:04.558 "compare_and_write": false, 00:20:04.558 "abort": false, 00:20:04.558 "seek_hole": true, 00:20:04.558 "seek_data": true, 00:20:04.558 "copy": false, 00:20:04.558 "nvme_iov_md": false 00:20:04.558 }, 00:20:04.558 "driver_specific": { 00:20:04.558 "lvol": { 00:20:04.558 "lvol_store_uuid": "a27919f1-ccb0-4459-bc69-8cd3d7e88d38", 00:20:04.558 "base_bdev": "nvme0n1", 00:20:04.558 "thin_provision": true, 00:20:04.558 "num_allocated_clusters": 0, 00:20:04.558 "snapshot": false, 00:20:04.558 "clone": false, 00:20:04.558 "esnap_clone": false 00:20:04.558 } 00:20:04.558 } 00:20:04.558 } 00:20:04.558 ]' 00:20:04.558 20:47:21 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:04.558 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:04.558 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:04.558 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:04.558 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:04.558 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:04.558 20:47:21 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:04.558 20:47:21 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:04.558 20:47:21 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:04.820 20:47:21 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:04.820 20:47:21 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:04.820 20:47:21 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size a168c293-bbf0-42a6-9e11-154612fafb02 00:20:04.820 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=a168c293-bbf0-42a6-9e11-154612fafb02 00:20:04.820 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:04.820 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:04.820 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:04.820 20:47:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a168c293-bbf0-42a6-9e11-154612fafb02 00:20:05.082 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:05.082 { 00:20:05.082 "name": "a168c293-bbf0-42a6-9e11-154612fafb02", 00:20:05.082 "aliases": [ 00:20:05.082 "lvs/nvme0n1p0" 00:20:05.082 ], 00:20:05.082 "product_name": "Logical Volume", 00:20:05.082 "block_size": 4096, 00:20:05.082 "num_blocks": 26476544, 00:20:05.082 "uuid": "a168c293-bbf0-42a6-9e11-154612fafb02", 00:20:05.082 "assigned_rate_limits": { 00:20:05.082 "rw_ios_per_sec": 0, 00:20:05.082 "rw_mbytes_per_sec": 0, 00:20:05.082 "r_mbytes_per_sec": 0, 00:20:05.082 "w_mbytes_per_sec": 0 00:20:05.082 }, 00:20:05.082 "claimed": false, 00:20:05.082 "zoned": false, 00:20:05.082 "supported_io_types": { 00:20:05.082 "read": true, 00:20:05.082 "write": true, 00:20:05.082 "unmap": true, 00:20:05.082 "flush": false, 00:20:05.082 "reset": true, 00:20:05.082 "nvme_admin": false, 00:20:05.082 "nvme_io": false, 00:20:05.082 "nvme_io_md": false, 00:20:05.082 "write_zeroes": true, 00:20:05.082 "zcopy": false, 00:20:05.082 "get_zone_info": false, 00:20:05.082 "zone_management": false, 00:20:05.082 "zone_append": false, 00:20:05.082 "compare": false, 00:20:05.082 "compare_and_write": false, 00:20:05.082 "abort": false, 00:20:05.082 "seek_hole": true, 00:20:05.082 "seek_data": true, 00:20:05.082 "copy": false, 00:20:05.082 "nvme_iov_md": false 00:20:05.082 }, 00:20:05.082 "driver_specific": { 00:20:05.082 "lvol": { 00:20:05.082 "lvol_store_uuid": "a27919f1-ccb0-4459-bc69-8cd3d7e88d38", 00:20:05.082 "base_bdev": "nvme0n1", 00:20:05.082 "thin_provision": true, 00:20:05.082 "num_allocated_clusters": 0, 00:20:05.082 "snapshot": false, 00:20:05.082 "clone": false, 00:20:05.082 "esnap_clone": false 00:20:05.082 } 00:20:05.082 } 00:20:05.082 } 00:20:05.082 ]' 00:20:05.082 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:05.082 20:47:22 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:20:05.082 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:05.082 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:05.082 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:05.082 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:05.082 20:47:22 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:05.082 20:47:22 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:05.344 20:47:22 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:05.344 20:47:22 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:05.344 20:47:22 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size a168c293-bbf0-42a6-9e11-154612fafb02 00:20:05.344 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=a168c293-bbf0-42a6-9e11-154612fafb02 00:20:05.344 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:05.344 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:05.344 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:05.344 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a168c293-bbf0-42a6-9e11-154612fafb02 00:20:05.605 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:05.605 { 00:20:05.605 "name": "a168c293-bbf0-42a6-9e11-154612fafb02", 00:20:05.605 "aliases": [ 00:20:05.605 "lvs/nvme0n1p0" 00:20:05.605 ], 00:20:05.605 "product_name": "Logical Volume", 00:20:05.605 "block_size": 4096, 00:20:05.605 "num_blocks": 26476544, 00:20:05.605 "uuid": "a168c293-bbf0-42a6-9e11-154612fafb02", 00:20:05.605 "assigned_rate_limits": { 00:20:05.605 "rw_ios_per_sec": 0, 00:20:05.605 "rw_mbytes_per_sec": 0, 00:20:05.605 "r_mbytes_per_sec": 0, 00:20:05.605 "w_mbytes_per_sec": 0 00:20:05.605 }, 00:20:05.605 "claimed": false, 00:20:05.605 "zoned": false, 00:20:05.605 "supported_io_types": { 00:20:05.605 "read": true, 00:20:05.605 "write": true, 00:20:05.605 "unmap": true, 00:20:05.605 "flush": false, 00:20:05.605 "reset": true, 00:20:05.605 "nvme_admin": false, 00:20:05.605 "nvme_io": false, 00:20:05.605 "nvme_io_md": false, 00:20:05.605 "write_zeroes": true, 00:20:05.605 "zcopy": false, 00:20:05.605 "get_zone_info": false, 00:20:05.605 "zone_management": false, 00:20:05.605 "zone_append": false, 00:20:05.605 "compare": false, 00:20:05.605 "compare_and_write": false, 00:20:05.605 "abort": false, 00:20:05.605 "seek_hole": true, 00:20:05.605 "seek_data": true, 00:20:05.605 "copy": false, 00:20:05.605 "nvme_iov_md": false 00:20:05.605 }, 00:20:05.605 "driver_specific": { 00:20:05.605 "lvol": { 00:20:05.605 "lvol_store_uuid": "a27919f1-ccb0-4459-bc69-8cd3d7e88d38", 00:20:05.605 "base_bdev": "nvme0n1", 00:20:05.605 "thin_provision": true, 00:20:05.605 "num_allocated_clusters": 0, 00:20:05.605 "snapshot": false, 00:20:05.605 "clone": false, 00:20:05.605 "esnap_clone": false 00:20:05.605 } 00:20:05.605 } 00:20:05.605 } 00:20:05.605 ]' 00:20:05.605 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:05.605 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:05.605 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:05.605 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
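The cache side is prepared the same way: the harness fixes cache_size=5171 (MiB) and carves exactly one split of that size from the attached PCIe controller; the resulting nvc0n1p0 is the write-buffer cache whose "5171.00 MiB" capacity reappears in the layout dump below. The two RPCs, as issued in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # exposes nvc0n1
  "$rpc" bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB split -> nvc0n1p0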
nb=26476544 00:20:05.605 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:05.605 20:47:22 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:05.606 20:47:22 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:05.606 20:47:22 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a168c293-bbf0-42a6-9e11-154612fafb02 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:05.868 [2024-12-06 20:47:22.765933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.868 [2024-12-06 20:47:22.765997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:05.868 [2024-12-06 20:47:22.766198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:05.868 [2024-12-06 20:47:22.766209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.868 [2024-12-06 20:47:22.769517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.868 [2024-12-06 20:47:22.769569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.868 [2024-12-06 20:47:22.769584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.274 ms 00:20:05.868 [2024-12-06 20:47:22.769592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.868 [2024-12-06 20:47:22.769751] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:05.868 [2024-12-06 20:47:22.770538] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:05.868 [2024-12-06 20:47:22.770575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.868 [2024-12-06 20:47:22.770584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.868 [2024-12-06 20:47:22.770596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:20:05.868 [2024-12-06 20:47:22.770605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.868 [2024-12-06 20:47:22.770715] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ed6b6440-c21e-40a5-a295-d460d8302bed 00:20:05.868 [2024-12-06 20:47:22.772615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.868 [2024-12-06 20:47:22.772665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:05.868 [2024-12-06 20:47:22.772676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:05.868 [2024-12-06 20:47:22.772687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.868 [2024-12-06 20:47:22.782096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.868 [2024-12-06 20:47:22.782142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.868 [2024-12-06 20:47:22.782155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.316 ms 00:20:05.868 [2024-12-06 20:47:22.782165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.868 [2024-12-06 20:47:22.782333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.868 [2024-12-06 20:47:22.782348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.868 [2024-12-06 20:47:22.782357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
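The startup sequence that follows is driven by the single bdev_ftl_create call above: -d names the thin-provisioned lvol as the base device, -c names the 5171 MiB split as the NV cache, --l2p_dram_limit caps the resident L2P table at 60 MiB, and --overprovisioning sets 10% overprovisioning; -t 240 only widens the RPC client timeout, since first-time startup scrubs the cache. Reissued standalone:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -t 240 bdev_ftl_create -b ftl0 \
      -d a168c293-bbf0-42a6-9e11-154612fafb02 -c nvc0n1p0 \
      --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10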
duration: 0.088 ms 00:20:05.868 [2024-12-06 20:47:22.782371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.868 [2024-12-06 20:47:22.782423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.868 [2024-12-06 20:47:22.782435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:05.868 [2024-12-06 20:47:22.782443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:05.868 [2024-12-06 20:47:22.782455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.868 [2024-12-06 20:47:22.782489] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:05.868 [2024-12-06 20:47:22.786879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.868 [2024-12-06 20:47:22.786936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.868 [2024-12-06 20:47:22.786950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.392 ms 00:20:05.868 [2024-12-06 20:47:22.786958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.868 [2024-12-06 20:47:22.787046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.868 [2024-12-06 20:47:22.787073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:05.868 [2024-12-06 20:47:22.787085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:05.868 [2024-12-06 20:47:22.787092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.868 [2024-12-06 20:47:22.787130] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:05.868 [2024-12-06 20:47:22.787281] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:05.868 [2024-12-06 20:47:22.787300] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:05.868 [2024-12-06 20:47:22.787312] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:05.868 [2024-12-06 20:47:22.787325] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:05.868 [2024-12-06 20:47:22.787335] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:05.868 [2024-12-06 20:47:22.787346] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:05.868 [2024-12-06 20:47:22.787354] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:05.868 [2024-12-06 20:47:22.787367] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:05.868 [2024-12-06 20:47:22.787377] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:05.868 [2024-12-06 20:47:22.787388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.868 [2024-12-06 20:47:22.787397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:05.868 [2024-12-06 20:47:22.787407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:20:05.868 [2024-12-06 20:47:22.787414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.868 [2024-12-06 20:47:22.787523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.868 
[2024-12-06 20:47:22.787538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:05.868 [2024-12-06 20:47:22.787550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:05.869 [2024-12-06 20:47:22.787557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.869 [2024-12-06 20:47:22.787686] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:05.869 [2024-12-06 20:47:22.787697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:05.869 [2024-12-06 20:47:22.787707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:05.869 [2024-12-06 20:47:22.787715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.869 [2024-12-06 20:47:22.787726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:05.869 [2024-12-06 20:47:22.787733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:05.869 [2024-12-06 20:47:22.787741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:05.869 [2024-12-06 20:47:22.787748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:05.869 [2024-12-06 20:47:22.787757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:05.869 [2024-12-06 20:47:22.787763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:05.869 [2024-12-06 20:47:22.787772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:05.869 [2024-12-06 20:47:22.787779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:05.869 [2024-12-06 20:47:22.787789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:05.869 [2024-12-06 20:47:22.787795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:05.869 [2024-12-06 20:47:22.787804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:05.869 [2024-12-06 20:47:22.787811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.869 [2024-12-06 20:47:22.787822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:05.869 [2024-12-06 20:47:22.787829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:05.869 [2024-12-06 20:47:22.787838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.869 [2024-12-06 20:47:22.787845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:05.869 [2024-12-06 20:47:22.787854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:05.869 [2024-12-06 20:47:22.787861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:05.869 [2024-12-06 20:47:22.787870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:05.869 [2024-12-06 20:47:22.787880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:05.869 [2024-12-06 20:47:22.787911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:05.869 [2024-12-06 20:47:22.787919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:05.869 [2024-12-06 20:47:22.787928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:05.869 [2024-12-06 20:47:22.787935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:05.869 [2024-12-06 20:47:22.787945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:05.869 [2024-12-06 20:47:22.787952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:05.869 [2024-12-06 20:47:22.787961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:05.869 [2024-12-06 20:47:22.787969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:05.869 [2024-12-06 20:47:22.787981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:05.869 [2024-12-06 20:47:22.787989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:05.869 [2024-12-06 20:47:22.787997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:05.869 [2024-12-06 20:47:22.788004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:05.869 [2024-12-06 20:47:22.788014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:05.869 [2024-12-06 20:47:22.788021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:05.869 [2024-12-06 20:47:22.788031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:05.869 [2024-12-06 20:47:22.788038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.869 [2024-12-06 20:47:22.788047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:05.869 [2024-12-06 20:47:22.788054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:05.869 [2024-12-06 20:47:22.788063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.869 [2024-12-06 20:47:22.788069] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:05.869 [2024-12-06 20:47:22.788079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:05.869 [2024-12-06 20:47:22.788087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:05.869 [2024-12-06 20:47:22.788097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.869 [2024-12-06 20:47:22.788104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:05.869 [2024-12-06 20:47:22.788115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:05.869 [2024-12-06 20:47:22.788122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:05.869 [2024-12-06 20:47:22.788132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:05.869 [2024-12-06 20:47:22.788153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:05.869 [2024-12-06 20:47:22.788163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:05.869 [2024-12-06 20:47:22.788172] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:05.869 [2024-12-06 20:47:22.788183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:05.869 [2024-12-06 20:47:22.788195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:05.869 [2024-12-06 20:47:22.788205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:05.869 [2024-12-06 20:47:22.788212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:05.869 [2024-12-06 20:47:22.788222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:05.869 [2024-12-06 20:47:22.788230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:05.869 [2024-12-06 20:47:22.788251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:05.869 [2024-12-06 20:47:22.788258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:05.869 [2024-12-06 20:47:22.788268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:05.869 [2024-12-06 20:47:22.788276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:05.869 [2024-12-06 20:47:22.788291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:05.869 [2024-12-06 20:47:22.788298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:05.869 [2024-12-06 20:47:22.788308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:05.869 [2024-12-06 20:47:22.788316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:05.869 [2024-12-06 20:47:22.788325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:05.869 [2024-12-06 20:47:22.788333] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:05.869 [2024-12-06 20:47:22.788350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:05.869 [2024-12-06 20:47:22.788358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:05.869 [2024-12-06 20:47:22.788368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:05.869 [2024-12-06 20:47:22.788376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:05.869 [2024-12-06 20:47:22.788385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:05.869 [2024-12-06 20:47:22.788394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.869 [2024-12-06 20:47:22.788404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:05.869 [2024-12-06 20:47:22.788412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:20:05.869 [2024-12-06 20:47:22.788422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.869 [2024-12-06 20:47:22.788507] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:05.869 [2024-12-06 20:47:22.788522] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:08.410 [2024-12-06 20:47:25.061769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.410 [2024-12-06 20:47:25.061827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:08.410 [2024-12-06 20:47:25.061842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2273.254 ms 00:20:08.410 [2024-12-06 20:47:25.061852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.410 [2024-12-06 20:47:25.087003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.410 [2024-12-06 20:47:25.087051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.410 [2024-12-06 20:47:25.087065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.891 ms 00:20:08.410 [2024-12-06 20:47:25.087074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.410 [2024-12-06 20:47:25.087205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.410 [2024-12-06 20:47:25.087218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:08.410 [2024-12-06 20:47:25.087240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:08.410 [2024-12-06 20:47:25.087252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.410 [2024-12-06 20:47:25.129350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.410 [2024-12-06 20:47:25.129391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.410 [2024-12-06 20:47:25.129403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.069 ms 00:20:08.410 [2024-12-06 20:47:25.129414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.410 [2024-12-06 20:47:25.129507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.410 [2024-12-06 20:47:25.129520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.410 [2024-12-06 20:47:25.129529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:08.410 [2024-12-06 20:47:25.129538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.410 [2024-12-06 20:47:25.129843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.410 [2024-12-06 20:47:25.129862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.410 [2024-12-06 20:47:25.129871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:20:08.410 [2024-12-06 20:47:25.129880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.410 [2024-12-06 20:47:25.130016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.410 [2024-12-06 20:47:25.130029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:08.410 [2024-12-06 20:47:25.130049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:08.410 [2024-12-06 20:47:25.130059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.410 [2024-12-06 20:47:25.144493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.410 [2024-12-06 20:47:25.144527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:08.410 [2024-12-06 20:47:25.144537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.409 ms 00:20:08.411 [2024-12-06 20:47:25.144547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.411 [2024-12-06 20:47:25.155731] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:08.411 [2024-12-06 20:47:25.169877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.411 [2024-12-06 20:47:25.169919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:08.411 [2024-12-06 20:47:25.169931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.237 ms 00:20:08.411 [2024-12-06 20:47:25.169940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.411 [2024-12-06 20:47:25.228612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.411 [2024-12-06 20:47:25.228779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:08.411 [2024-12-06 20:47:25.228801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.606 ms 00:20:08.411 [2024-12-06 20:47:25.228810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.411 [2024-12-06 20:47:25.229026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.411 [2024-12-06 20:47:25.229038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:08.411 [2024-12-06 20:47:25.229051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:20:08.411 [2024-12-06 20:47:25.229058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.411 [2024-12-06 20:47:25.252291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.411 [2024-12-06 20:47:25.252325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:08.411 [2024-12-06 20:47:25.252338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.203 ms 00:20:08.411 [2024-12-06 20:47:25.252346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.411 [2024-12-06 20:47:25.275071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.411 [2024-12-06 20:47:25.275257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:08.411 [2024-12-06 20:47:25.275276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.666 ms 00:20:08.411 [2024-12-06 20:47:25.275283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.411 [2024-12-06 20:47:25.277153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.411 [2024-12-06 20:47:25.277249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:08.411 [2024-12-06 20:47:25.277284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.561 ms 00:20:08.411 [2024-12-06 20:47:25.277307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.411 [2024-12-06 20:47:25.346369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.411 [2024-12-06 20:47:25.346528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:08.411 [2024-12-06 20:47:25.346553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.938 ms 00:20:08.411 [2024-12-06 20:47:25.346562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
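The "59 (of 60) MiB" notice is consistent with the geometry reported earlier: the L2P maps 23,592,960 blocks at 4 bytes per entry, so a fully resident table would need 23592960 x 4 B = 94,371,840 B = 90 MiB (exactly the 90.00 MiB l2p region in the layout dump above). That exceeds the 60 MiB --l2p_dram_limit, so the L2P runs as a paged cache, apparently keeping about 1 MiB of headroom below the limit.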
00:20:08.411 [2024-12-06 20:47:25.370391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.411 [2024-12-06 20:47:25.370426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:08.411 [2024-12-06 20:47:25.370440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.739 ms 00:20:08.411 [2024-12-06 20:47:25.370447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.411 [2024-12-06 20:47:25.393661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.411 [2024-12-06 20:47:25.393789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:08.411 [2024-12-06 20:47:25.393808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.158 ms 00:20:08.411 [2024-12-06 20:47:25.393816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.411 [2024-12-06 20:47:25.416881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.411 [2024-12-06 20:47:25.416930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:08.411 [2024-12-06 20:47:25.416942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.003 ms 00:20:08.411 [2024-12-06 20:47:25.416949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.411 [2024-12-06 20:47:25.417009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.411 [2024-12-06 20:47:25.417021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:08.411 [2024-12-06 20:47:25.417034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:08.411 [2024-12-06 20:47:25.417041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.411 [2024-12-06 20:47:25.417115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.411 [2024-12-06 20:47:25.417124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:08.411 [2024-12-06 20:47:25.417133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:08.411 [2024-12-06 20:47:25.417140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.411 [2024-12-06 20:47:25.417996] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:08.411 [2024-12-06 20:47:25.420996] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2651.813 ms, result 0 00:20:08.411 [2024-12-06 20:47:25.421731] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:08.411 { 00:20:08.411 "name": "ftl0", 00:20:08.411 "uuid": "ed6b6440-c21e-40a5-a295-d460d8302bed" 00:20:08.411 } 00:20:08.411 20:47:25 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:08.411 20:47:25 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:08.411 20:47:25 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:08.411 20:47:25 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:20:08.411 20:47:25 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:08.411 20:47:25 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:08.411 20:47:25 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:08.669 20:47:25 ftl.ftl_trim -- 
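waitforbdev then blocks until ftl0 is registered; per the trace it boils down to a bdev_wait_for_examine followed by a bdev_get_bdevs lookup with a 2000 ms timeout, roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_wait_for_examine
  "$rpc" bdev_get_bdevs -b ftl0 -t 2000 >/dev/null && echo "ftl0 ready"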
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:08.934 [ 00:20:08.934 { 00:20:08.934 "name": "ftl0", 00:20:08.934 "aliases": [ 00:20:08.934 "ed6b6440-c21e-40a5-a295-d460d8302bed" 00:20:08.934 ], 00:20:08.934 "product_name": "FTL disk", 00:20:08.934 "block_size": 4096, 00:20:08.934 "num_blocks": 23592960, 00:20:08.934 "uuid": "ed6b6440-c21e-40a5-a295-d460d8302bed", 00:20:08.934 "assigned_rate_limits": { 00:20:08.934 "rw_ios_per_sec": 0, 00:20:08.934 "rw_mbytes_per_sec": 0, 00:20:08.934 "r_mbytes_per_sec": 0, 00:20:08.934 "w_mbytes_per_sec": 0 00:20:08.934 }, 00:20:08.934 "claimed": false, 00:20:08.934 "zoned": false, 00:20:08.934 "supported_io_types": { 00:20:08.934 "read": true, 00:20:08.934 "write": true, 00:20:08.934 "unmap": true, 00:20:08.934 "flush": true, 00:20:08.934 "reset": false, 00:20:08.934 "nvme_admin": false, 00:20:08.934 "nvme_io": false, 00:20:08.934 "nvme_io_md": false, 00:20:08.934 "write_zeroes": true, 00:20:08.934 "zcopy": false, 00:20:08.934 "get_zone_info": false, 00:20:08.934 "zone_management": false, 00:20:08.934 "zone_append": false, 00:20:08.934 "compare": false, 00:20:08.934 "compare_and_write": false, 00:20:08.934 "abort": false, 00:20:08.934 "seek_hole": false, 00:20:08.934 "seek_data": false, 00:20:08.934 "copy": false, 00:20:08.934 "nvme_iov_md": false 00:20:08.934 }, 00:20:08.934 "driver_specific": { 00:20:08.934 "ftl": { 00:20:08.934 "base_bdev": "a168c293-bbf0-42a6-9e11-154612fafb02", 00:20:08.935 "cache": "nvc0n1p0" 00:20:08.935 } 00:20:08.935 } 00:20:08.935 } 00:20:08.935 ] 00:20:08.935 20:47:25 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:20:08.935 20:47:25 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:08.935 20:47:25 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:08.935 20:47:26 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:08.935 20:47:26 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:09.215 20:47:26 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:09.215 { 00:20:09.215 "name": "ftl0", 00:20:09.215 "aliases": [ 00:20:09.215 "ed6b6440-c21e-40a5-a295-d460d8302bed" 00:20:09.215 ], 00:20:09.215 "product_name": "FTL disk", 00:20:09.215 "block_size": 4096, 00:20:09.215 "num_blocks": 23592960, 00:20:09.215 "uuid": "ed6b6440-c21e-40a5-a295-d460d8302bed", 00:20:09.215 "assigned_rate_limits": { 00:20:09.215 "rw_ios_per_sec": 0, 00:20:09.215 "rw_mbytes_per_sec": 0, 00:20:09.215 "r_mbytes_per_sec": 0, 00:20:09.215 "w_mbytes_per_sec": 0 00:20:09.215 }, 00:20:09.215 "claimed": false, 00:20:09.215 "zoned": false, 00:20:09.215 "supported_io_types": { 00:20:09.215 "read": true, 00:20:09.215 "write": true, 00:20:09.215 "unmap": true, 00:20:09.215 "flush": true, 00:20:09.215 "reset": false, 00:20:09.215 "nvme_admin": false, 00:20:09.215 "nvme_io": false, 00:20:09.215 "nvme_io_md": false, 00:20:09.215 "write_zeroes": true, 00:20:09.215 "zcopy": false, 00:20:09.215 "get_zone_info": false, 00:20:09.215 "zone_management": false, 00:20:09.215 "zone_append": false, 00:20:09.215 "compare": false, 00:20:09.215 "compare_and_write": false, 00:20:09.215 "abort": false, 00:20:09.215 "seek_hole": false, 00:20:09.215 "seek_data": false, 00:20:09.215 "copy": false, 00:20:09.215 "nvme_iov_md": false 00:20:09.215 }, 00:20:09.215 "driver_specific": { 00:20:09.215 "ftl": { 00:20:09.215 "base_bdev": "a168c293-bbf0-42a6-9e11-154612fafb02", 
00:20:09.215 "cache": "nvc0n1p0" 00:20:09.215 } 00:20:09.215 } 00:20:09.215 } 00:20:09.215 ]' 00:20:09.215 20:47:26 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:09.215 20:47:26 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:09.215 20:47:26 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:09.486 [2024-12-06 20:47:26.449071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.486 [2024-12-06 20:47:26.449117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:09.486 [2024-12-06 20:47:26.449132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:09.486 [2024-12-06 20:47:26.449144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.486 [2024-12-06 20:47:26.449175] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:09.486 [2024-12-06 20:47:26.451749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.486 [2024-12-06 20:47:26.451776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:09.486 [2024-12-06 20:47:26.451791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.557 ms 00:20:09.486 [2024-12-06 20:47:26.451799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.486 [2024-12-06 20:47:26.452282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.486 [2024-12-06 20:47:26.452298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:09.486 [2024-12-06 20:47:26.452310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:20:09.486 [2024-12-06 20:47:26.452317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.486 [2024-12-06 20:47:26.455948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.486 [2024-12-06 20:47:26.455970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:09.486 [2024-12-06 20:47:26.455982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.608 ms 00:20:09.486 [2024-12-06 20:47:26.455990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.486 [2024-12-06 20:47:26.463077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.486 [2024-12-06 20:47:26.463202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:09.486 [2024-12-06 20:47:26.463221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.047 ms 00:20:09.486 [2024-12-06 20:47:26.463229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.486 [2024-12-06 20:47:26.486682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.486 [2024-12-06 20:47:26.486798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:09.486 [2024-12-06 20:47:26.486820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.374 ms 00:20:09.486 [2024-12-06 20:47:26.486827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.486 [2024-12-06 20:47:26.501356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.486 [2024-12-06 20:47:26.501469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:09.486 [2024-12-06 20:47:26.501488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.476 ms 00:20:09.486 [2024-12-06 20:47:26.501498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.486 [2024-12-06 20:47:26.501680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.486 [2024-12-06 20:47:26.501690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:09.486 [2024-12-06 20:47:26.501700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:20:09.486 [2024-12-06 20:47:26.501707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.486 [2024-12-06 20:47:26.524619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.486 [2024-12-06 20:47:26.524725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:09.486 [2024-12-06 20:47:26.524743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.884 ms 00:20:09.486 [2024-12-06 20:47:26.524750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.486 [2024-12-06 20:47:26.547410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.486 [2024-12-06 20:47:26.547514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:09.486 [2024-12-06 20:47:26.547533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.607 ms 00:20:09.486 [2024-12-06 20:47:26.547540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.486 [2024-12-06 20:47:26.570114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.486 [2024-12-06 20:47:26.570218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:09.486 [2024-12-06 20:47:26.570235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.517 ms 00:20:09.486 [2024-12-06 20:47:26.570242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.486 [2024-12-06 20:47:26.592881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.486 [2024-12-06 20:47:26.592918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:09.486 [2024-12-06 20:47:26.592930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.545 ms 00:20:09.486 [2024-12-06 20:47:26.592937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.487 [2024-12-06 20:47:26.592990] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:09.487 [2024-12-06 20:47:26.593004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:09.487 [2024-12-06 20:47:26.593015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:09.487 [2024-12-06 20:47:26.593023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:09.487 [2024-12-06 20:47:26.593032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:09.487 [2024-12-06 20:47:26.593039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:09.487 [2024-12-06 20:47:26.593050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:09.487 [2024-12-06 20:47:26.593057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:09.487 [2024-12-06 20:47:26.593066] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 8-100: 0 / 261120 wr_cnt: 0 state: free (93 further per-band entries, all identical to Bands 1-7 above, collapsed)
00:20:09.487 [2024-12-06 20:47:26.593869] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:09.487 [2024-12-06 20:47:26.593880] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed6b6440-c21e-40a5-a295-d460d8302bed
00:20:09.487 [2024-12-06 20:47:26.594135] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:09.487 [2024-12-06 20:47:26.594169] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:09.487 [2024-12-06 20:47:26.594189] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:09.487 [2024-12-06 20:47:26.594212] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:09.487 [2024-12-06 20:47:26.594231] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:09.487 [2024-12-06 20:47:26.594251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
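The "WAF: inf" entry just above is the degenerate case of write amplification = total writes / user writes = 960 / 0: the 960 blocks written so far are presumably all FTL-internal persistence (superblock, band, trim and P2L metadata from this clean shutdown), with no user I/O yet.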
00:20:09.487 [2024-12-06 20:47:26.594270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:09.487 [2024-12-06 20:47:26.594289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:09.487 [2024-12-06 20:47:26.594307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:09.487 [2024-12-06 20:47:26.594328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.487 [2024-12-06 20:47:26.594405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:09.487 [2024-12-06 20:47:26.594432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.339 ms 00:20:09.487 [2024-12-06 20:47:26.594451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.487 [2024-12-06 20:47:26.606701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.487 [2024-12-06 20:47:26.606804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:09.487 [2024-12-06 20:47:26.606858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.193 ms 00:20:09.487 [2024-12-06 20:47:26.606880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.487 [2024-12-06 20:47:26.607314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.487 [2024-12-06 20:47:26.607397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:09.487 [2024-12-06 20:47:26.607446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:20:09.487 [2024-12-06 20:47:26.607469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.746 [2024-12-06 20:47:26.650676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.746 [2024-12-06 20:47:26.650799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:09.746 [2024-12-06 20:47:26.650850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.746 [2024-12-06 20:47:26.650873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.746 [2024-12-06 20:47:26.650992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.746 [2024-12-06 20:47:26.651019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:09.746 [2024-12-06 20:47:26.651040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.746 [2024-12-06 20:47:26.651058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.746 [2024-12-06 20:47:26.651131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.746 [2024-12-06 20:47:26.651208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:09.746 [2024-12-06 20:47:26.651238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.746 [2024-12-06 20:47:26.651257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.746 [2024-12-06 20:47:26.651300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.746 [2024-12-06 20:47:26.651321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:09.746 [2024-12-06 20:47:26.651342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.746 [2024-12-06 20:47:26.651361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.746 [2024-12-06 20:47:26.731119] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.746 [2024-12-06 20:47:26.731161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:09.746 [2024-12-06 20:47:26.731174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.746 [2024-12-06 20:47:26.731183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.746 [2024-12-06 20:47:26.792488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.746 [2024-12-06 20:47:26.792531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:09.746 [2024-12-06 20:47:26.792543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.746 [2024-12-06 20:47:26.792551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.746 [2024-12-06 20:47:26.792645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.746 [2024-12-06 20:47:26.792656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:09.746 [2024-12-06 20:47:26.792668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.746 [2024-12-06 20:47:26.792677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.746 [2024-12-06 20:47:26.792731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.746 [2024-12-06 20:47:26.792739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:09.746 [2024-12-06 20:47:26.792748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.746 [2024-12-06 20:47:26.792755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.746 [2024-12-06 20:47:26.792858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.746 [2024-12-06 20:47:26.792868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:09.746 [2024-12-06 20:47:26.792877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.746 [2024-12-06 20:47:26.792886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.746 [2024-12-06 20:47:26.792966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.746 [2024-12-06 20:47:26.792976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:09.746 [2024-12-06 20:47:26.792985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.746 [2024-12-06 20:47:26.792992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.746 [2024-12-06 20:47:26.793037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.746 [2024-12-06 20:47:26.793046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:09.746 [2024-12-06 20:47:26.793056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.746 [2024-12-06 20:47:26.793064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.746 [2024-12-06 20:47:26.793115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.746 [2024-12-06 20:47:26.793124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:09.746 [2024-12-06 20:47:26.793133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.746 [2024-12-06 20:47:26.793140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:09.746 [2024-12-06 20:47:26.793302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.210 ms, result 0 00:20:09.746 true 00:20:09.746 20:47:26 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76289 00:20:09.746 20:47:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76289 ']' 00:20:09.746 20:47:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76289 00:20:09.746 20:47:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:09.746 20:47:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.746 20:47:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76289 00:20:09.746 killing process with pid 76289 00:20:09.746 20:47:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.746 20:47:26 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.746 20:47:26 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76289' 00:20:09.746 20:47:26 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76289 00:20:09.746 20:47:26 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76289 00:20:16.321 20:47:33 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:17.262 65536+0 records in 00:20:17.262 65536+0 records out 00:20:17.262 268435456 bytes (268 MB, 256 MiB) copied, 1.09717 s, 245 MB/s 00:20:17.262 20:47:34 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:17.262 [2024-12-06 20:47:34.298466] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
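Aside on the dd step in the trace above: ftl/trim.sh@66 builds the 256 MiB random pattern that trim.sh@69 then pushes onto ftl0 through spdk_dd, and the reported numbers are internally consistent. A minimal sketch of that step and its arithmetic, assuming the elided of= operand is the test/ftl/random_pattern file that spdk_dd reads next (the xtrace does not show the output target):

    # Pattern generation as traced above. The output path is an assumption;
    # the xtrace elides the of= operand, and spdk_dd reads this file next.
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
       bs=4K count=65536
    # Size check: 65536 blocks * 4096 bytes = 268435456 bytes = 256 MiB.
    echo $(( 65536 * 4096 ))
    # dd reports decimal megabytes per second: 268435456 B / 1.09717 s / 1e6.
    awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.09717 / 1e6 }'   # ~245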
00:20:17.262 [2024-12-06 20:47:34.298613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76472 ] 00:20:17.523 [2024-12-06 20:47:34.462496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.523 [2024-12-06 20:47:34.580869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.783 [2024-12-06 20:47:34.876973] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.783 [2024-12-06 20:47:34.877281] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.044 [2024-12-06 20:47:35.038326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.044 [2024-12-06 20:47:35.038558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:18.044 [2024-12-06 20:47:35.038584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:18.044 [2024-12-06 20:47:35.038594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.044 [2024-12-06 20:47:35.041588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.044 [2024-12-06 20:47:35.041779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.044 [2024-12-06 20:47:35.041801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.966 ms 00:20:18.044 [2024-12-06 20:47:35.041809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.044 [2024-12-06 20:47:35.042366] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:18.044 [2024-12-06 20:47:35.043205] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:18.044 [2024-12-06 20:47:35.043247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.044 [2024-12-06 20:47:35.043256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.044 [2024-12-06 20:47:35.043268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:20:18.044 [2024-12-06 20:47:35.043276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.044 [2024-12-06 20:47:35.045094] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:18.044 [2024-12-06 20:47:35.059346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.044 [2024-12-06 20:47:35.059396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:18.044 [2024-12-06 20:47:35.059410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.255 ms 00:20:18.044 [2024-12-06 20:47:35.059418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.044 [2024-12-06 20:47:35.059544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.044 [2024-12-06 20:47:35.059557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:18.044 [2024-12-06 20:47:35.059567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:18.044 [2024-12-06 20:47:35.059575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.044 [2024-12-06 20:47:35.067765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:18.044 [2024-12-06 20:47:35.067987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.045 [2024-12-06 20:47:35.068007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.143 ms 00:20:18.045 [2024-12-06 20:47:35.068015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.045 [2024-12-06 20:47:35.068131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.045 [2024-12-06 20:47:35.068143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.045 [2024-12-06 20:47:35.068152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:18.045 [2024-12-06 20:47:35.068160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.045 [2024-12-06 20:47:35.068190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.045 [2024-12-06 20:47:35.068199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:18.045 [2024-12-06 20:47:35.068207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:18.045 [2024-12-06 20:47:35.068215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.045 [2024-12-06 20:47:35.068263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:18.045 [2024-12-06 20:47:35.072168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.045 [2024-12-06 20:47:35.072207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.045 [2024-12-06 20:47:35.072218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.911 ms 00:20:18.045 [2024-12-06 20:47:35.072226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.045 [2024-12-06 20:47:35.072318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.045 [2024-12-06 20:47:35.072329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:18.045 [2024-12-06 20:47:35.072338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:18.045 [2024-12-06 20:47:35.072346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.045 [2024-12-06 20:47:35.072371] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:18.045 [2024-12-06 20:47:35.072393] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:18.045 [2024-12-06 20:47:35.072430] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:18.045 [2024-12-06 20:47:35.072446] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:18.045 [2024-12-06 20:47:35.072552] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:18.045 [2024-12-06 20:47:35.072563] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:18.045 [2024-12-06 20:47:35.072574] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:18.045 [2024-12-06 20:47:35.072587] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:18.045 [2024-12-06 20:47:35.072597] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:18.045 [2024-12-06 20:47:35.072605] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:18.045 [2024-12-06 20:47:35.072613] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:18.045 [2024-12-06 20:47:35.072621] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:18.045 [2024-12-06 20:47:35.072629] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:18.045 [2024-12-06 20:47:35.072638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.045 [2024-12-06 20:47:35.072646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:18.045 [2024-12-06 20:47:35.072654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:18.045 [2024-12-06 20:47:35.072661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.045 [2024-12-06 20:47:35.072748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.045 [2024-12-06 20:47:35.072760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:18.045 [2024-12-06 20:47:35.072768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:18.045 [2024-12-06 20:47:35.072776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.045 [2024-12-06 20:47:35.072879] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:18.045 [2024-12-06 20:47:35.072918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:18.045 [2024-12-06 20:47:35.072928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.045 [2024-12-06 20:47:35.072935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.045 [2024-12-06 20:47:35.072943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:18.045 [2024-12-06 20:47:35.072951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:18.045 [2024-12-06 20:47:35.072958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:18.045 [2024-12-06 20:47:35.072966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:18.045 [2024-12-06 20:47:35.072973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:18.045 [2024-12-06 20:47:35.072980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.045 [2024-12-06 20:47:35.072987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:18.045 [2024-12-06 20:47:35.073008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:18.045 [2024-12-06 20:47:35.073015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.045 [2024-12-06 20:47:35.073021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:18.045 [2024-12-06 20:47:35.073029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:18.045 [2024-12-06 20:47:35.073039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.045 [2024-12-06 20:47:35.073046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:18.045 [2024-12-06 20:47:35.073053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:18.045 [2024-12-06 20:47:35.073060] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.045 [2024-12-06 20:47:35.073067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:18.045 [2024-12-06 20:47:35.073074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:18.045 [2024-12-06 20:47:35.073081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.045 [2024-12-06 20:47:35.073088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:18.045 [2024-12-06 20:47:35.073095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:18.045 [2024-12-06 20:47:35.073101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.045 [2024-12-06 20:47:35.073109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:18.045 [2024-12-06 20:47:35.073117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:18.045 [2024-12-06 20:47:35.073123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.045 [2024-12-06 20:47:35.073130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:18.045 [2024-12-06 20:47:35.073136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:18.045 [2024-12-06 20:47:35.073143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.045 [2024-12-06 20:47:35.073150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:18.045 [2024-12-06 20:47:35.073156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:18.045 [2024-12-06 20:47:35.073163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.045 [2024-12-06 20:47:35.073170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:18.045 [2024-12-06 20:47:35.073176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:18.045 [2024-12-06 20:47:35.073183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.045 [2024-12-06 20:47:35.073189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:18.045 [2024-12-06 20:47:35.073196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:18.045 [2024-12-06 20:47:35.073203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.045 [2024-12-06 20:47:35.073210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:18.045 [2024-12-06 20:47:35.073217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:18.045 [2024-12-06 20:47:35.073225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.045 [2024-12-06 20:47:35.073231] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:18.045 [2024-12-06 20:47:35.073246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:18.045 [2024-12-06 20:47:35.073256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.045 [2024-12-06 20:47:35.073264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.045 [2024-12-06 20:47:35.073273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:18.045 [2024-12-06 20:47:35.073281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:18.045 [2024-12-06 20:47:35.073287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:18.045 
[2024-12-06 20:47:35.073294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:18.045 [2024-12-06 20:47:35.073301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:18.045 [2024-12-06 20:47:35.073309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:18.045 [2024-12-06 20:47:35.073318] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:18.045 [2024-12-06 20:47:35.073328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.045 [2024-12-06 20:47:35.073336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:18.045 [2024-12-06 20:47:35.073344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:18.045 [2024-12-06 20:47:35.073352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:18.045 [2024-12-06 20:47:35.073360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:18.045 [2024-12-06 20:47:35.073367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:18.045 [2024-12-06 20:47:35.073374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:18.045 [2024-12-06 20:47:35.073381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:18.046 [2024-12-06 20:47:35.073388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:18.046 [2024-12-06 20:47:35.073396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:18.046 [2024-12-06 20:47:35.073404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:18.046 [2024-12-06 20:47:35.073411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:18.046 [2024-12-06 20:47:35.073419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:18.046 [2024-12-06 20:47:35.073426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:18.046 [2024-12-06 20:47:35.073433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:18.046 [2024-12-06 20:47:35.073441] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:18.046 [2024-12-06 20:47:35.073449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.046 [2024-12-06 20:47:35.073458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:18.046 [2024-12-06 20:47:35.073466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:18.046 [2024-12-06 20:47:35.073473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:18.046 [2024-12-06 20:47:35.073480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:18.046 [2024-12-06 20:47:35.073488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-06 20:47:35.073499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:18.046 [2024-12-06 20:47:35.073507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:20:18.046 [2024-12-06 20:47:35.073514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-06 20:47:35.105029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-06 20:47:35.105077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.046 [2024-12-06 20:47:35.105088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.458 ms 00:20:18.046 [2024-12-06 20:47:35.105097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-06 20:47:35.105234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-06 20:47:35.105245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:18.046 [2024-12-06 20:47:35.105255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:18.046 [2024-12-06 20:47:35.105263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-06 20:47:35.154874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-06 20:47:35.154942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.046 [2024-12-06 20:47:35.154960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.588 ms 00:20:18.046 [2024-12-06 20:47:35.154968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-06 20:47:35.155081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-06 20:47:35.155095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.046 [2024-12-06 20:47:35.155105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:18.046 [2024-12-06 20:47:35.155113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-06 20:47:35.155655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-06 20:47:35.155677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.046 [2024-12-06 20:47:35.155696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:20:18.046 [2024-12-06 20:47:35.155704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-06 20:47:35.155846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-06 20:47:35.155857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.046 [2024-12-06 20:47:35.155865] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:20:18.046 [2024-12-06 20:47:35.155873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.046 [2024-12-06 20:47:35.172356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.046 [2024-12-06 20:47:35.172400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.046 [2024-12-06 20:47:35.172411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.424 ms 00:20:18.046 [2024-12-06 20:47:35.172420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.305 [2024-12-06 20:47:35.186979] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:18.305 [2024-12-06 20:47:35.187195] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:18.305 [2024-12-06 20:47:35.187216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.305 [2024-12-06 20:47:35.187225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:18.305 [2024-12-06 20:47:35.187235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.684 ms 00:20:18.305 [2024-12-06 20:47:35.187242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.305 [2024-12-06 20:47:35.221173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.305 [2024-12-06 20:47:35.221247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:18.305 [2024-12-06 20:47:35.221263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.396 ms 00:20:18.305 [2024-12-06 20:47:35.221272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.305 [2024-12-06 20:47:35.234463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.305 [2024-12-06 20:47:35.234680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:18.305 [2024-12-06 20:47:35.234704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.081 ms 00:20:18.306 [2024-12-06 20:47:35.234712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.306 [2024-12-06 20:47:35.255682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.306 [2024-12-06 20:47:35.255736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:18.306 [2024-12-06 20:47:35.255751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.390 ms 00:20:18.306 [2024-12-06 20:47:35.255760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.306 [2024-12-06 20:47:35.256491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.306 [2024-12-06 20:47:35.256519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:18.306 [2024-12-06 20:47:35.256532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:20:18.306 [2024-12-06 20:47:35.256541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.306 [2024-12-06 20:47:35.321516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.306 [2024-12-06 20:47:35.321776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:18.306 [2024-12-06 20:47:35.321802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.948 ms 00:20:18.306 [2024-12-06 20:47:35.321812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.306 [2024-12-06 20:47:35.333287] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:18.306 [2024-12-06 20:47:35.352444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.306 [2024-12-06 20:47:35.352659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:18.306 [2024-12-06 20:47:35.352682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.404 ms 00:20:18.306 [2024-12-06 20:47:35.352691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.306 [2024-12-06 20:47:35.352797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.306 [2024-12-06 20:47:35.352810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:18.306 [2024-12-06 20:47:35.352820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:18.306 [2024-12-06 20:47:35.352829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.306 [2024-12-06 20:47:35.352924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.306 [2024-12-06 20:47:35.352935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:18.306 [2024-12-06 20:47:35.352944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:18.306 [2024-12-06 20:47:35.352953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.306 [2024-12-06 20:47:35.352991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.306 [2024-12-06 20:47:35.353003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:18.306 [2024-12-06 20:47:35.353011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:18.306 [2024-12-06 20:47:35.353019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.306 [2024-12-06 20:47:35.353056] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:18.306 [2024-12-06 20:47:35.353067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.306 [2024-12-06 20:47:35.353076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:18.306 [2024-12-06 20:47:35.353085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:18.306 [2024-12-06 20:47:35.353093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.306 [2024-12-06 20:47:35.378765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.306 [2024-12-06 20:47:35.378815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:18.306 [2024-12-06 20:47:35.378829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.647 ms 00:20:18.306 [2024-12-06 20:47:35.378838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.306 [2024-12-06 20:47:35.378969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.306 [2024-12-06 20:47:35.378983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:18.306 [2024-12-06 20:47:35.378992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:18.306 [2024-12-06 20:47:35.379000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
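Two sanity checks fall out of the startup trace above. The layout dump is self-consistent: 23592960 L2P entries at an address size of 4 bytes give 23592960 * 4 = 94371840 bytes, exactly the 90.00 MiB the "Region l2p" lines report. And because every management step logs a name/duration pair, the per-step costs can be recovered from the log and compared against the total that the 'FTL startup' finish message prints just below. A sketch, assuming the startup portion of this log has been saved to a file (ftl_startup.log is a placeholder name):

    # L2P table size: entries * address size, expressed in MiB.
    awk 'BEGIN { printf "%.2f MiB\n", 23592960 * 4 / 1048576 }'   # 90.00
    # Sum the per-step durations; the result should land close to the
    # 341.474 ms reported for the whole 'FTL startup' management process.
    grep -oE 'duration: [0-9.]+ ms' ftl_startup.log \
      | awk '{ sum += $2 } END { printf "steps total: %.3f ms\n", sum }'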
00:20:18.306 [2024-12-06 20:47:35.380146] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.306 [2024-12-06 20:47:35.383400] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 341.474 ms, result 0 00:20:18.306 [2024-12-06 20:47:35.384480] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:18.306 [2024-12-06 20:47:35.398043] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.707  [2024-12-06T20:47:37.768Z] Copying: 24/256 [MB] (24 MBps) [2024-12-06T20:47:38.699Z] Copying: 65/256 [MB] (41 MBps) [2024-12-06T20:47:39.631Z] Copying: 97/256 [MB] (31 MBps) [2024-12-06T20:47:40.564Z] Copying: 130/256 [MB] (33 MBps) [2024-12-06T20:47:41.494Z] Copying: 153/256 [MB] (22 MBps) [2024-12-06T20:47:42.426Z] Copying: 166/256 [MB] (13 MBps) [2024-12-06T20:47:43.799Z] Copying: 177/256 [MB] (11 MBps) [2024-12-06T20:47:44.732Z] Copying: 189/256 [MB] (11 MBps) [2024-12-06T20:47:45.664Z] Copying: 200/256 [MB] (11 MBps) [2024-12-06T20:47:46.595Z] Copying: 214/256 [MB] (13 MBps) [2024-12-06T20:47:47.529Z] Copying: 227/256 [MB] (13 MBps) [2024-12-06T20:47:48.464Z] Copying: 243/256 [MB] (16 MBps) [2024-12-06T20:47:48.464Z] Copying: 256/256 [MB] (average 19 MBps)[2024-12-06 20:47:48.265932] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:31.331 [2024-12-06 20:47:48.275112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.331 [2024-12-06 20:47:48.275148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:31.331 [2024-12-06 20:47:48.275160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.331 [2024-12-06 20:47:48.275174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.331 [2024-12-06 20:47:48.275195] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:31.331 [2024-12-06 20:47:48.277798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.331 [2024-12-06 20:47:48.277826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:31.331 [2024-12-06 20:47:48.277837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:20:31.331 [2024-12-06 20:47:48.277845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.331 [2024-12-06 20:47:48.280021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.331 [2024-12-06 20:47:48.280137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:31.331 [2024-12-06 20:47:48.280153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:20:31.331 [2024-12-06 20:47:48.280161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.331 [2024-12-06 20:47:48.287561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.331 [2024-12-06 20:47:48.287680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:31.331 [2024-12-06 20:47:48.287694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.382 ms 00:20:31.331 [2024-12-06 20:47:48.287702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.331 [2024-12-06 20:47:48.294668] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.331 [2024-12-06 20:47:48.294772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:31.331 [2024-12-06 20:47:48.294785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.935 ms 00:20:31.331 [2024-12-06 20:47:48.294793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.331 [2024-12-06 20:47:48.318049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.331 [2024-12-06 20:47:48.318080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:31.331 [2024-12-06 20:47:48.318090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.204 ms 00:20:31.331 [2024-12-06 20:47:48.318097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.331 [2024-12-06 20:47:48.332060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.331 [2024-12-06 20:47:48.332096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:31.331 [2024-12-06 20:47:48.332110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.929 ms 00:20:31.331 [2024-12-06 20:47:48.332117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.331 [2024-12-06 20:47:48.332256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.331 [2024-12-06 20:47:48.332265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:31.331 [2024-12-06 20:47:48.332274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:31.331 [2024-12-06 20:47:48.332289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.331 [2024-12-06 20:47:48.355943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.331 [2024-12-06 20:47:48.355973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:31.331 [2024-12-06 20:47:48.355983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.638 ms 00:20:31.331 [2024-12-06 20:47:48.355990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.331 [2024-12-06 20:47:48.379188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.331 [2024-12-06 20:47:48.379218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:31.331 [2024-12-06 20:47:48.379227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.166 ms 00:20:31.331 [2024-12-06 20:47:48.379235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.331 [2024-12-06 20:47:48.402307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.331 [2024-12-06 20:47:48.402336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:31.332 [2024-12-06 20:47:48.402345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.040 ms 00:20:31.332 [2024-12-06 20:47:48.402352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.332 [2024-12-06 20:47:48.425477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.332 [2024-12-06 20:47:48.425595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:31.332 [2024-12-06 20:47:48.425610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.064 ms 00:20:31.332 [2024-12-06 20:47:48.425617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:31.332 [2024-12-06 20:47:48.425646] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:31.332 [2024-12-06 20:47:48.425660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:20:31.332 [2024-12-06 20:47:48.425837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.425993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:31.332 [2024-12-06 20:47:48.426280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426396] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:31.333 [2024-12-06 20:47:48.426419] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:31.333 [2024-12-06 20:47:48.426426] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed6b6440-c21e-40a5-a295-d460d8302bed 00:20:31.333 [2024-12-06 20:47:48.426434] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:31.333 [2024-12-06 20:47:48.426441] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:31.333 [2024-12-06 20:47:48.426447] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:31.333 [2024-12-06 20:47:48.426455] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:31.333 [2024-12-06 20:47:48.426462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:31.333 [2024-12-06 20:47:48.426470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:31.333 [2024-12-06 20:47:48.426477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:31.333 [2024-12-06 20:47:48.426483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:31.333 [2024-12-06 20:47:48.426489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:31.333 [2024-12-06 20:47:48.426496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.333 [2024-12-06 20:47:48.426505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:31.333 [2024-12-06 20:47:48.426513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:20:31.333 [2024-12-06 20:47:48.426521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.333 [2024-12-06 20:47:48.439091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.333 [2024-12-06 20:47:48.439119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:31.333 [2024-12-06 20:47:48.439128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.540 ms 00:20:31.333 [2024-12-06 20:47:48.439135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.333 [2024-12-06 20:47:48.439488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.333 [2024-12-06 20:47:48.439497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:31.333 [2024-12-06 20:47:48.439505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:20:31.333 [2024-12-06 20:47:48.439512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.591 [2024-12-06 20:47:48.474554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.591 [2024-12-06 20:47:48.474587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.591 [2024-12-06 20:47:48.474597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.591 [2024-12-06 20:47:48.474605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.591 [2024-12-06 20:47:48.474687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.591 [2024-12-06 20:47:48.474696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.591 
[2024-12-06 20:47:48.474703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.591 [2024-12-06 20:47:48.474710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.591 [2024-12-06 20:47:48.474751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.591 [2024-12-06 20:47:48.474759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.591 [2024-12-06 20:47:48.474767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.591 [2024-12-06 20:47:48.474774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.591 [2024-12-06 20:47:48.474790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.591 [2024-12-06 20:47:48.474800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.591 [2024-12-06 20:47:48.474808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.591 [2024-12-06 20:47:48.474814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.591 [2024-12-06 20:47:48.552695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.591 [2024-12-06 20:47:48.552856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.591 [2024-12-06 20:47:48.552873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.591 [2024-12-06 20:47:48.552881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.591 [2024-12-06 20:47:48.616330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.591 [2024-12-06 20:47:48.616368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.591 [2024-12-06 20:47:48.616379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.591 [2024-12-06 20:47:48.616388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.591 [2024-12-06 20:47:48.616452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.591 [2024-12-06 20:47:48.616460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.592 [2024-12-06 20:47:48.616468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.592 [2024-12-06 20:47:48.616476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.592 [2024-12-06 20:47:48.616503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.592 [2024-12-06 20:47:48.616511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.592 [2024-12-06 20:47:48.616521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.592 [2024-12-06 20:47:48.616528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.592 [2024-12-06 20:47:48.616612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.592 [2024-12-06 20:47:48.616622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.592 [2024-12-06 20:47:48.616630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.592 [2024-12-06 20:47:48.616637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.592 [2024-12-06 20:47:48.616667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.592 [2024-12-06 20:47:48.616675] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:31.592 [2024-12-06 20:47:48.616683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.592 [2024-12-06 20:47:48.616692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.592 [2024-12-06 20:47:48.616728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.592 [2024-12-06 20:47:48.616736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.592 [2024-12-06 20:47:48.616744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.592 [2024-12-06 20:47:48.616751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.592 [2024-12-06 20:47:48.616792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.592 [2024-12-06 20:47:48.616801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.592 [2024-12-06 20:47:48.616812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.592 [2024-12-06 20:47:48.616819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.592 [2024-12-06 20:47:48.616970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.837 ms, result 0 00:20:32.526 00:20:32.526 00:20:32.526 20:47:49 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76634 00:20:32.526 20:47:49 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:32.526 20:47:49 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76634 00:20:32.526 20:47:49 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76634 ']' 00:20:32.526 20:47:49 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.526 20:47:49 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.526 20:47:49 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.526 20:47:49 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.526 20:47:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:32.526 [2024-12-06 20:47:49.403377] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
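The xtrace above shows the ftl_trim test restarting the SPDK target for the trim run: trim.sh@71 launches build/bin/spdk_tgt with FTL init logging (-L ftl_init), trim.sh@72 stores its pid in svcpid (76634 here), and trim.sh@73's waitforlisten blocks until that process answers on the UNIX domain socket /var/tmp/spdk.sock, retrying up to max_retries=100 times. A minimal bash sketch of the same start-and-wait pattern, assuming rpc_get_methods as the readiness probe and a 0.5 s retry cadence (both are assumptions; the log does not show waitforlisten's internals):

    SPDK=/home/vagrant/spdk_repo/spdk          # repo path as used throughout this log
    "$SPDK/build/bin/spdk_tgt" -L ftl_init &   # start the target in the background
    svcpid=$!                                  # remember its pid, as trim.sh@72 does
    for ((i = 0; i < 100; i++)); do            # cap retries like max_retries=100 above
        # Probe the default RPC socket; any successful RPC means the target is listening.
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        # Bail out if the target died before it ever started listening.
        kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done

Once the socket is live, the test drives everything over RPC: trim.sh@75 loads the bdev configuration via rpc.py load_config, and trim.sh@78/@79 later issue the trims seen below, e.g. "$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024. The kill -0 liveness check mirrors the one killprocess performs at teardown later in this log.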
00:20:32.526 [2024-12-06 20:47:49.403512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76634 ] 00:20:32.526 [2024-12-06 20:47:49.556225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.526 [2024-12-06 20:47:49.651319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.460 20:47:50 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.460 20:47:50 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:33.460 20:47:50 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:33.460 [2024-12-06 20:47:50.445295] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:33.460 [2024-12-06 20:47:50.445356] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:33.720 [2024-12-06 20:47:50.599979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.720 [2024-12-06 20:47:50.600027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:33.720 [2024-12-06 20:47:50.600042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:33.720 [2024-12-06 20:47:50.600050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.720 [2024-12-06 20:47:50.602678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.720 [2024-12-06 20:47:50.602714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.720 [2024-12-06 20:47:50.602726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.609 ms 00:20:33.720 [2024-12-06 20:47:50.602733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.720 [2024-12-06 20:47:50.602813] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:33.720 [2024-12-06 20:47:50.603487] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:33.720 [2024-12-06 20:47:50.603509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.720 [2024-12-06 20:47:50.603517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.720 [2024-12-06 20:47:50.603527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:20:33.720 [2024-12-06 20:47:50.603535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.720 [2024-12-06 20:47:50.604992] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:33.720 [2024-12-06 20:47:50.617732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.720 [2024-12-06 20:47:50.617771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:33.720 [2024-12-06 20:47:50.617784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.745 ms 00:20:33.720 [2024-12-06 20:47:50.617794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.720 [2024-12-06 20:47:50.617879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.720 [2024-12-06 20:47:50.617909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:33.720 [2024-12-06 20:47:50.617918] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:33.720 [2024-12-06 20:47:50.617927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.720 [2024-12-06 20:47:50.622873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.720 [2024-12-06 20:47:50.622925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.720 [2024-12-06 20:47:50.622935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.897 ms 00:20:33.720 [2024-12-06 20:47:50.622944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.720 [2024-12-06 20:47:50.623039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.720 [2024-12-06 20:47:50.623050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.720 [2024-12-06 20:47:50.623058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:33.720 [2024-12-06 20:47:50.623070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.720 [2024-12-06 20:47:50.623093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.720 [2024-12-06 20:47:50.623103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:33.720 [2024-12-06 20:47:50.623110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:33.720 [2024-12-06 20:47:50.623118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.720 [2024-12-06 20:47:50.623141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:33.720 [2024-12-06 20:47:50.626447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.720 [2024-12-06 20:47:50.626475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.720 [2024-12-06 20:47:50.626486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.310 ms 00:20:33.720 [2024-12-06 20:47:50.626493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.720 [2024-12-06 20:47:50.626544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.720 [2024-12-06 20:47:50.626552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:33.720 [2024-12-06 20:47:50.626562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:33.720 [2024-12-06 20:47:50.626576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.720 [2024-12-06 20:47:50.626602] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:33.721 [2024-12-06 20:47:50.626625] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:33.721 [2024-12-06 20:47:50.626671] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:33.721 [2024-12-06 20:47:50.626686] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:33.721 [2024-12-06 20:47:50.626791] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:33.721 [2024-12-06 20:47:50.626801] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:33.721 [2024-12-06 20:47:50.626814] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:33.721 [2024-12-06 20:47:50.626824] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:33.721 [2024-12-06 20:47:50.626834] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:33.721 [2024-12-06 20:47:50.626842] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:33.721 [2024-12-06 20:47:50.626850] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:33.721 [2024-12-06 20:47:50.626857] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:33.721 [2024-12-06 20:47:50.626868] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:33.721 [2024-12-06 20:47:50.626875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.721 [2024-12-06 20:47:50.626884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:33.721 [2024-12-06 20:47:50.627056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:20:33.721 [2024-12-06 20:47:50.627082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.721 [2024-12-06 20:47:50.627201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.721 [2024-12-06 20:47:50.627226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:33.721 [2024-12-06 20:47:50.627246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:33.721 [2024-12-06 20:47:50.627265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.721 [2024-12-06 20:47:50.627429] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:33.721 [2024-12-06 20:47:50.627459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:33.721 [2024-12-06 20:47:50.627479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.721 [2024-12-06 20:47:50.627540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.721 [2024-12-06 20:47:50.627563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:33.721 [2024-12-06 20:47:50.627585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:33.721 [2024-12-06 20:47:50.627629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:33.721 [2024-12-06 20:47:50.627655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:33.721 [2024-12-06 20:47:50.627674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:33.721 [2024-12-06 20:47:50.627694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.721 [2024-12-06 20:47:50.627742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:33.721 [2024-12-06 20:47:50.627766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:33.721 [2024-12-06 20:47:50.628469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.721 [2024-12-06 20:47:50.628481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:33.721 [2024-12-06 20:47:50.628490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:33.721 [2024-12-06 20:47:50.628499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.721 
[2024-12-06 20:47:50.628507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:33.721 [2024-12-06 20:47:50.628517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:33.721 [2024-12-06 20:47:50.628530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.721 [2024-12-06 20:47:50.628540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:33.721 [2024-12-06 20:47:50.628547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:33.721 [2024-12-06 20:47:50.628557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.721 [2024-12-06 20:47:50.628564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:33.721 [2024-12-06 20:47:50.628576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:33.721 [2024-12-06 20:47:50.628583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.721 [2024-12-06 20:47:50.628592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:33.721 [2024-12-06 20:47:50.628600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:33.721 [2024-12-06 20:47:50.628609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.721 [2024-12-06 20:47:50.628616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:33.721 [2024-12-06 20:47:50.628628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:33.721 [2024-12-06 20:47:50.628635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.721 [2024-12-06 20:47:50.628645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:33.721 [2024-12-06 20:47:50.628652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:33.721 [2024-12-06 20:47:50.628661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.721 [2024-12-06 20:47:50.628669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:33.721 [2024-12-06 20:47:50.628678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:33.721 [2024-12-06 20:47:50.628685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.721 [2024-12-06 20:47:50.628694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:33.721 [2024-12-06 20:47:50.628702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:33.721 [2024-12-06 20:47:50.628712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.721 [2024-12-06 20:47:50.628720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:33.721 [2024-12-06 20:47:50.628729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:33.721 [2024-12-06 20:47:50.628737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.721 [2024-12-06 20:47:50.628746] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:33.721 [2024-12-06 20:47:50.628756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:33.721 [2024-12-06 20:47:50.628767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.721 [2024-12-06 20:47:50.628775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.721 [2024-12-06 20:47:50.628785] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:33.721 [2024-12-06 20:47:50.628793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:33.721 [2024-12-06 20:47:50.628803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:33.721 [2024-12-06 20:47:50.628811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:33.721 [2024-12-06 20:47:50.628820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:33.721 [2024-12-06 20:47:50.628827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:33.721 [2024-12-06 20:47:50.628838] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:33.721 [2024-12-06 20:47:50.628848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.721 [2024-12-06 20:47:50.628862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:33.721 [2024-12-06 20:47:50.628871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:33.721 [2024-12-06 20:47:50.628885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:33.721 [2024-12-06 20:47:50.628903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:33.721 [2024-12-06 20:47:50.628913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:33.721 [2024-12-06 20:47:50.628920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:33.721 [2024-12-06 20:47:50.628928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:33.721 [2024-12-06 20:47:50.628935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:33.721 [2024-12-06 20:47:50.628943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:33.721 [2024-12-06 20:47:50.628951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:33.721 [2024-12-06 20:47:50.628959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:33.721 [2024-12-06 20:47:50.628966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:33.721 [2024-12-06 20:47:50.628974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:33.721 [2024-12-06 20:47:50.628982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:33.721 [2024-12-06 20:47:50.628990] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:33.721 [2024-12-06 
20:47:50.628998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.721 [2024-12-06 20:47:50.629009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:33.721 [2024-12-06 20:47:50.629016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:33.721 [2024-12-06 20:47:50.629024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:33.722 [2024-12-06 20:47:50.629033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:33.722 [2024-12-06 20:47:50.629043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.629051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:33.722 [2024-12-06 20:47:50.629060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.681 ms 00:20:33.722 [2024-12-06 20:47:50.629068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.655035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.655154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.722 [2024-12-06 20:47:50.655172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.891 ms 00:20:33.722 [2024-12-06 20:47:50.655182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.655294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.655304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:33.722 [2024-12-06 20:47:50.655313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:33.722 [2024-12-06 20:47:50.655320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.685839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.685874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.722 [2024-12-06 20:47:50.685886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.496 ms 00:20:33.722 [2024-12-06 20:47:50.685908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.685962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.685971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:33.722 [2024-12-06 20:47:50.685982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:33.722 [2024-12-06 20:47:50.685989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.686313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.686332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:33.722 [2024-12-06 20:47:50.686345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:20:33.722 [2024-12-06 20:47:50.686352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.686476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.686485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:33.722 [2024-12-06 20:47:50.686494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:33.722 [2024-12-06 20:47:50.686502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.700973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.701091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:33.722 [2024-12-06 20:47:50.701109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.450 ms 00:20:33.722 [2024-12-06 20:47:50.701117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.732436] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:33.722 [2024-12-06 20:47:50.732474] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:33.722 [2024-12-06 20:47:50.732489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.732498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:33.722 [2024-12-06 20:47:50.732510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.265 ms 00:20:33.722 [2024-12-06 20:47:50.732522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.756880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.756921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:33.722 [2024-12-06 20:47:50.756933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.285 ms 00:20:33.722 [2024-12-06 20:47:50.756941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.768547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.768577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:33.722 [2024-12-06 20:47:50.768590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.535 ms 00:20:33.722 [2024-12-06 20:47:50.768597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.780129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.780159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:33.722 [2024-12-06 20:47:50.780171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.466 ms 00:20:33.722 [2024-12-06 20:47:50.780178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.780795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.780814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:33.722 [2024-12-06 20:47:50.780824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:20:33.722 [2024-12-06 20:47:50.780832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 
20:47:50.836591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.722 [2024-12-06 20:47:50.836636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:33.722 [2024-12-06 20:47:50.836651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.734 ms 00:20:33.722 [2024-12-06 20:47:50.836659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.722 [2024-12-06 20:47:50.847081] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:33.981 [2024-12-06 20:47:50.861073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.981 [2024-12-06 20:47:50.861118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:33.981 [2024-12-06 20:47:50.861132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.326 ms 00:20:33.981 [2024-12-06 20:47:50.861141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.981 [2024-12-06 20:47:50.861212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.981 [2024-12-06 20:47:50.861224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:33.981 [2024-12-06 20:47:50.861232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:33.981 [2024-12-06 20:47:50.861241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.981 [2024-12-06 20:47:50.861286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.981 [2024-12-06 20:47:50.861296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:33.981 [2024-12-06 20:47:50.861303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:33.981 [2024-12-06 20:47:50.861315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.981 [2024-12-06 20:47:50.861337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.981 [2024-12-06 20:47:50.861347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:33.981 [2024-12-06 20:47:50.861355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:33.981 [2024-12-06 20:47:50.861366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.981 [2024-12-06 20:47:50.861397] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:33.981 [2024-12-06 20:47:50.861410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.981 [2024-12-06 20:47:50.861420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:33.981 [2024-12-06 20:47:50.861429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:33.981 [2024-12-06 20:47:50.861436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.981 [2024-12-06 20:47:50.885066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.981 [2024-12-06 20:47:50.885192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:33.981 [2024-12-06 20:47:50.885213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.604 ms 00:20:33.981 [2024-12-06 20:47:50.885221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.981 [2024-12-06 20:47:50.885303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.981 [2024-12-06 20:47:50.885314] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:33.981 [2024-12-06 20:47:50.885323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:33.981 [2024-12-06 20:47:50.885333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.981 [2024-12-06 20:47:50.886094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:33.981 [2024-12-06 20:47:50.889101] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 285.829 ms, result 0 00:20:33.981 [2024-12-06 20:47:50.891401] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:33.981 Some configs were skipped because the RPC state that can call them passed over. 00:20:33.981 20:47:50 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:34.239 [2024-12-06 20:47:51.170015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.239 [2024-12-06 20:47:51.170166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:34.239 [2024-12-06 20:47:51.170186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.666 ms 00:20:34.239 [2024-12-06 20:47:51.170196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.239 [2024-12-06 20:47:51.170244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.897 ms, result 0 00:20:34.239 true 00:20:34.239 20:47:51 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:34.240 [2024-12-06 20:47:51.370138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.240 [2024-12-06 20:47:51.370272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:34.240 [2024-12-06 20:47:51.370332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.547 ms 00:20:34.240 [2024-12-06 20:47:51.370356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.240 [2024-12-06 20:47:51.370420] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.827 ms, result 0 00:20:34.498 true 00:20:34.498 20:47:51 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76634 00:20:34.498 20:47:51 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76634 ']' 00:20:34.498 20:47:51 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76634 00:20:34.498 20:47:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:34.498 20:47:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.498 20:47:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76634 00:20:34.498 20:47:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:34.498 killing process with pid 76634 00:20:34.498 20:47:51 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:34.498 20:47:51 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76634' 00:20:34.498 20:47:51 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76634 00:20:34.498 20:47:51 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76634 00:20:35.067 [2024-12-06 20:47:52.092181] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.067 [2024-12-06 20:47:52.092395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:35.067 [2024-12-06 20:47:52.092448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:35.067 [2024-12-06 20:47:52.092469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.067 [2024-12-06 20:47:52.092508] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:35.067 [2024-12-06 20:47:52.094603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.067 [2024-12-06 20:47:52.094702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:35.067 [2024-12-06 20:47:52.094719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.062 ms 00:20:35.067 [2024-12-06 20:47:52.094725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.067 [2024-12-06 20:47:52.094987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.067 [2024-12-06 20:47:52.095000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:35.067 [2024-12-06 20:47:52.095008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:20:35.067 [2024-12-06 20:47:52.095015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.067 [2024-12-06 20:47:52.098100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.067 [2024-12-06 20:47:52.098123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:35.067 [2024-12-06 20:47:52.098134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.068 ms 00:20:35.067 [2024-12-06 20:47:52.098140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.067 [2024-12-06 20:47:52.103531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.067 [2024-12-06 20:47:52.103637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:35.067 [2024-12-06 20:47:52.103655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.363 ms 00:20:35.067 [2024-12-06 20:47:52.103662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.067 [2024-12-06 20:47:52.111128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.067 [2024-12-06 20:47:52.111161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:35.067 [2024-12-06 20:47:52.111172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.406 ms 00:20:35.067 [2024-12-06 20:47:52.111177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.067 [2024-12-06 20:47:52.117693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.067 [2024-12-06 20:47:52.117722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:35.067 [2024-12-06 20:47:52.117732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.484 ms 00:20:35.067 [2024-12-06 20:47:52.117738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.067 [2024-12-06 20:47:52.117850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.067 [2024-12-06 20:47:52.117858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:35.067 [2024-12-06 20:47:52.117866] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:35.067 [2024-12-06 20:47:52.117872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.067 [2024-12-06 20:47:52.125580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.067 [2024-12-06 20:47:52.125686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:35.067 [2024-12-06 20:47:52.125701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.692 ms 00:20:35.067 [2024-12-06 20:47:52.125706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.067 [2024-12-06 20:47:52.132913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.067 [2024-12-06 20:47:52.133033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:35.067 [2024-12-06 20:47:52.133084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.175 ms 00:20:35.067 [2024-12-06 20:47:52.133102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.067 [2024-12-06 20:47:52.140098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.067 [2024-12-06 20:47:52.140192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:35.067 [2024-12-06 20:47:52.140250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.951 ms 00:20:35.067 [2024-12-06 20:47:52.140268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.067 [2024-12-06 20:47:52.147319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.067 [2024-12-06 20:47:52.147406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:35.067 [2024-12-06 20:47:52.147452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.995 ms 00:20:35.067 [2024-12-06 20:47:52.147468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.067 [2024-12-06 20:47:52.147512] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:35.067 [2024-12-06 20:47:52.147567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147875] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.147985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.148009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.148034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.148059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.148081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.148134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.148159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.148184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:35.067 [2024-12-06 20:47:52.148206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 
[2024-12-06 20:47:52.148572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.148875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:35.068 [2024-12-06 20:47:52.149416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.149983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:35.068 [2024-12-06 20:47:52.150742] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:35.068 [2024-12-06 20:47:52.150764] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed6b6440-c21e-40a5-a295-d460d8302bed 00:20:35.068 [2024-12-06 20:47:52.150789] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:35.068 [2024-12-06 20:47:52.150830] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:35.068 [2024-12-06 20:47:52.150846] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:35.068 [2024-12-06 20:47:52.150862] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:35.068 [2024-12-06 20:47:52.150877] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:35.068 [2024-12-06 20:47:52.150923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:35.068 [2024-12-06 20:47:52.150940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:35.068 [2024-12-06 20:47:52.150956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:35.068 [2024-12-06 20:47:52.151069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:35.068 [2024-12-06 20:47:52.151090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:35.068 [2024-12-06 20:47:52.151105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:35.068 [2024-12-06 20:47:52.151122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.579 ms 00:20:35.068 [2024-12-06 20:47:52.151137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.068 [2024-12-06 20:47:52.160829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.068 [2024-12-06 20:47:52.160927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:35.069 [2024-12-06 20:47:52.160970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.661 ms 00:20:35.069 [2024-12-06 20:47:52.160987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.069 [2024-12-06 20:47:52.161290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.069 [2024-12-06 20:47:52.161352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:35.069 [2024-12-06 20:47:52.161394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:20:35.069 [2024-12-06 20:47:52.161410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.069 [2024-12-06 20:47:52.196263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.069 [2024-12-06 20:47:52.196360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.069 [2024-12-06 20:47:52.196400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.069 [2024-12-06 20:47:52.196417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.069 [2024-12-06 20:47:52.196505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.069 [2024-12-06 20:47:52.196524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.069 [2024-12-06 20:47:52.196535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.069 [2024-12-06 20:47:52.196540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.069 [2024-12-06 20:47:52.196576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.069 [2024-12-06 20:47:52.196583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:35.069 [2024-12-06 20:47:52.196592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.069 [2024-12-06 20:47:52.196597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.069 [2024-12-06 20:47:52.196612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.069 [2024-12-06 20:47:52.196618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.069 [2024-12-06 20:47:52.196625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.069 [2024-12-06 20:47:52.196631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.330 [2024-12-06 20:47:52.255240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.330 [2024-12-06 20:47:52.255372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.330 [2024-12-06 20:47:52.255389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.330 [2024-12-06 20:47:52.255395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.330 [2024-12-06 
20:47:52.304505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.330 [2024-12-06 20:47:52.304537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.330 [2024-12-06 20:47:52.304547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.330 [2024-12-06 20:47:52.304555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.330 [2024-12-06 20:47:52.304615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.330 [2024-12-06 20:47:52.304622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:35.330 [2024-12-06 20:47:52.304631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.330 [2024-12-06 20:47:52.304637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.330 [2024-12-06 20:47:52.304659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.330 [2024-12-06 20:47:52.304665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:35.330 [2024-12-06 20:47:52.304673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.330 [2024-12-06 20:47:52.304678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.330 [2024-12-06 20:47:52.304748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.330 [2024-12-06 20:47:52.304755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:35.330 [2024-12-06 20:47:52.304762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.330 [2024-12-06 20:47:52.304768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.330 [2024-12-06 20:47:52.304793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.330 [2024-12-06 20:47:52.304800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:35.330 [2024-12-06 20:47:52.304807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.330 [2024-12-06 20:47:52.304812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.330 [2024-12-06 20:47:52.304845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.330 [2024-12-06 20:47:52.304852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:35.330 [2024-12-06 20:47:52.304861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.330 [2024-12-06 20:47:52.304867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.330 [2024-12-06 20:47:52.304925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.330 [2024-12-06 20:47:52.304933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:35.330 [2024-12-06 20:47:52.304941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.330 [2024-12-06 20:47:52.304946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.330 [2024-12-06 20:47:52.305052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 212.853 ms, result 0 00:20:35.937 20:47:52 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:35.937 20:47:52 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:35.937 [2024-12-06 20:47:52.886973] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:20:35.937 [2024-12-06 20:47:52.887188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76688 ] 00:20:35.937 [2024-12-06 20:47:53.034697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.211 [2024-12-06 20:47:53.111124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.211 [2024-12-06 20:47:53.323453] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.211 [2024-12-06 20:47:53.323506] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.471 [2024-12-06 20:47:53.475419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.471 [2024-12-06 20:47:53.475455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:36.471 [2024-12-06 20:47:53.475465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:36.471 [2024-12-06 20:47:53.475471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.471 [2024-12-06 20:47:53.477596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.471 [2024-12-06 20:47:53.477628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.471 [2024-12-06 20:47:53.477635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.113 ms 00:20:36.471 [2024-12-06 20:47:53.477641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.471 [2024-12-06 20:47:53.477700] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:36.471 [2024-12-06 20:47:53.478230] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:36.471 [2024-12-06 20:47:53.478382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.471 [2024-12-06 20:47:53.478391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.471 [2024-12-06 20:47:53.478398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:20:36.471 [2024-12-06 20:47:53.478403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.471 [2024-12-06 20:47:53.479400] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:36.471 [2024-12-06 20:47:53.489024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.471 [2024-12-06 20:47:53.489167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:36.471 [2024-12-06 20:47:53.489181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.625 ms 00:20:36.471 [2024-12-06 20:47:53.489187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.471 [2024-12-06 20:47:53.489255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.471 [2024-12-06 20:47:53.489264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:36.471 [2024-12-06 20:47:53.489271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.018 ms 00:20:36.471 [2024-12-06 20:47:53.489276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.471 [2024-12-06 20:47:53.493828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.471 [2024-12-06 20:47:53.493854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.471 [2024-12-06 20:47:53.493862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.523 ms 00:20:36.471 [2024-12-06 20:47:53.493867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.471 [2024-12-06 20:47:53.493952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.471 [2024-12-06 20:47:53.493960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.471 [2024-12-06 20:47:53.493966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:36.471 [2024-12-06 20:47:53.493972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.471 [2024-12-06 20:47:53.493992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.471 [2024-12-06 20:47:53.493998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:36.471 [2024-12-06 20:47:53.494004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:36.471 [2024-12-06 20:47:53.494009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.471 [2024-12-06 20:47:53.494024] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:36.471 [2024-12-06 20:47:53.496729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.471 [2024-12-06 20:47:53.496860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.471 [2024-12-06 20:47:53.496872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.708 ms 00:20:36.471 [2024-12-06 20:47:53.496878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.471 [2024-12-06 20:47:53.496922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.471 [2024-12-06 20:47:53.496930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:36.471 [2024-12-06 20:47:53.496936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:36.471 [2024-12-06 20:47:53.496941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.471 [2024-12-06 20:47:53.496957] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:36.471 [2024-12-06 20:47:53.496971] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:36.471 [2024-12-06 20:47:53.496997] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:36.471 [2024-12-06 20:47:53.497009] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:36.471 [2024-12-06 20:47:53.497088] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:36.471 [2024-12-06 20:47:53.497096] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:36.471 [2024-12-06 20:47:53.497104] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:36.471 [2024-12-06 20:47:53.497113] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:36.471 [2024-12-06 20:47:53.497120] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:36.471 [2024-12-06 20:47:53.497126] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:36.471 [2024-12-06 20:47:53.497131] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:36.471 [2024-12-06 20:47:53.497137] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:36.471 [2024-12-06 20:47:53.497143] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:36.471 [2024-12-06 20:47:53.497148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.471 [2024-12-06 20:47:53.497154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:36.471 [2024-12-06 20:47:53.497160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:36.471 [2024-12-06 20:47:53.497165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.471 [2024-12-06 20:47:53.497231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.471 [2024-12-06 20:47:53.497239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:36.471 [2024-12-06 20:47:53.497246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:36.471 [2024-12-06 20:47:53.497251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.471 [2024-12-06 20:47:53.497326] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:36.471 [2024-12-06 20:47:53.497333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:36.471 [2024-12-06 20:47:53.497339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.471 [2024-12-06 20:47:53.497345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.471 [2024-12-06 20:47:53.497351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:36.471 [2024-12-06 20:47:53.497356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:36.471 [2024-12-06 20:47:53.497361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:36.471 [2024-12-06 20:47:53.497366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:36.471 [2024-12-06 20:47:53.497373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:36.471 [2024-12-06 20:47:53.497378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.471 [2024-12-06 20:47:53.497383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:36.471 [2024-12-06 20:47:53.497394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:36.471 [2024-12-06 20:47:53.497400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.471 [2024-12-06 20:47:53.497405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:36.471 [2024-12-06 20:47:53.497411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:36.471 [2024-12-06 20:47:53.497416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.471 [2024-12-06 20:47:53.497421] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:36.471 [2024-12-06 20:47:53.497426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:36.471 [2024-12-06 20:47:53.497431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.471 [2024-12-06 20:47:53.497436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:36.471 [2024-12-06 20:47:53.497441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:36.471 [2024-12-06 20:47:53.497446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.471 [2024-12-06 20:47:53.497451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:36.471 [2024-12-06 20:47:53.497456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:36.471 [2024-12-06 20:47:53.497461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.471 [2024-12-06 20:47:53.497466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:36.471 [2024-12-06 20:47:53.497471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:36.471 [2024-12-06 20:47:53.497476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.471 [2024-12-06 20:47:53.497481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:36.471 [2024-12-06 20:47:53.497486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:36.472 [2024-12-06 20:47:53.497491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.472 [2024-12-06 20:47:53.497496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:36.472 [2024-12-06 20:47:53.497502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:36.472 [2024-12-06 20:47:53.497507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.472 [2024-12-06 20:47:53.497512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:36.472 [2024-12-06 20:47:53.497517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:36.472 [2024-12-06 20:47:53.497521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.472 [2024-12-06 20:47:53.497527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:36.472 [2024-12-06 20:47:53.497532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:36.472 [2024-12-06 20:47:53.497536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.472 [2024-12-06 20:47:53.497541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:36.472 [2024-12-06 20:47:53.497546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:36.472 [2024-12-06 20:47:53.497551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.472 [2024-12-06 20:47:53.497557] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:36.472 [2024-12-06 20:47:53.497563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:36.472 [2024-12-06 20:47:53.497570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.472 [2024-12-06 20:47:53.497575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.472 [2024-12-06 20:47:53.497581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:36.472 
[2024-12-06 20:47:53.497586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:36.472 [2024-12-06 20:47:53.497591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:36.472 [2024-12-06 20:47:53.497596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:36.472 [2024-12-06 20:47:53.497601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:36.472 [2024-12-06 20:47:53.497606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:36.472 [2024-12-06 20:47:53.497612] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:36.472 [2024-12-06 20:47:53.497618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.472 [2024-12-06 20:47:53.497625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:36.472 [2024-12-06 20:47:53.497630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:36.472 [2024-12-06 20:47:53.497636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:36.472 [2024-12-06 20:47:53.497641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:36.472 [2024-12-06 20:47:53.497646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:36.472 [2024-12-06 20:47:53.497652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:36.472 [2024-12-06 20:47:53.497657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:36.472 [2024-12-06 20:47:53.497663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:36.472 [2024-12-06 20:47:53.497668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:36.472 [2024-12-06 20:47:53.497674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:36.472 [2024-12-06 20:47:53.497679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:36.472 [2024-12-06 20:47:53.497684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:36.472 [2024-12-06 20:47:53.497689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:36.472 [2024-12-06 20:47:53.497695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:36.472 [2024-12-06 20:47:53.497700] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:36.472 [2024-12-06 20:47:53.497706] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.472 [2024-12-06 20:47:53.497713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:36.472 [2024-12-06 20:47:53.497718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:36.472 [2024-12-06 20:47:53.497723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:36.472 [2024-12-06 20:47:53.497729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:36.472 [2024-12-06 20:47:53.497734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.472 [2024-12-06 20:47:53.497743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:36.472 [2024-12-06 20:47:53.497749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:20:36.472 [2024-12-06 20:47:53.497754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.472 [2024-12-06 20:47:53.518735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.472 [2024-12-06 20:47:53.518764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.472 [2024-12-06 20:47:53.518772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.941 ms 00:20:36.472 [2024-12-06 20:47:53.518778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.472 [2024-12-06 20:47:53.518873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.472 [2024-12-06 20:47:53.518880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:36.472 [2024-12-06 20:47:53.518906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:36.472 [2024-12-06 20:47:53.518913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.472 [2024-12-06 20:47:53.558756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.472 [2024-12-06 20:47:53.558871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.472 [2024-12-06 20:47:53.558903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.826 ms 00:20:36.472 [2024-12-06 20:47:53.558910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.472 [2024-12-06 20:47:53.558969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.472 [2024-12-06 20:47:53.558977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.472 [2024-12-06 20:47:53.558984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:36.472 [2024-12-06 20:47:53.558989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.472 [2024-12-06 20:47:53.559276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.472 [2024-12-06 20:47:53.559288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.472 [2024-12-06 20:47:53.559295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:20:36.472 [2024-12-06 20:47:53.559304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.472 [2024-12-06 
20:47:53.559408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.472 [2024-12-06 20:47:53.559415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.472 [2024-12-06 20:47:53.559421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:36.472 [2024-12-06 20:47:53.559426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.472 [2024-12-06 20:47:53.570407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.472 [2024-12-06 20:47:53.570434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.472 [2024-12-06 20:47:53.570441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.965 ms 00:20:36.472 [2024-12-06 20:47:53.570447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.472 [2024-12-06 20:47:53.580181] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:36.472 [2024-12-06 20:47:53.580211] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:36.472 [2024-12-06 20:47:53.580225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.472 [2024-12-06 20:47:53.580231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:36.472 [2024-12-06 20:47:53.580238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.694 ms 00:20:36.472 [2024-12-06 20:47:53.580243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.472 [2024-12-06 20:47:53.598915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.472 [2024-12-06 20:47:53.598943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:36.472 [2024-12-06 20:47:53.598951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.624 ms 00:20:36.472 [2024-12-06 20:47:53.598958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.731 [2024-12-06 20:47:53.607904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.731 [2024-12-06 20:47:53.607930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:36.731 [2024-12-06 20:47:53.607938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.892 ms 00:20:36.731 [2024-12-06 20:47:53.607943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.731 [2024-12-06 20:47:53.616519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.731 [2024-12-06 20:47:53.616544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:36.731 [2024-12-06 20:47:53.616551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.534 ms 00:20:36.731 [2024-12-06 20:47:53.616557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.731 [2024-12-06 20:47:53.617024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.731 [2024-12-06 20:47:53.617072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:36.731 [2024-12-06 20:47:53.617081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:20:36.731 [2024-12-06 20:47:53.617087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.732 [2024-12-06 20:47:53.661226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:36.732 [2024-12-06 20:47:53.661267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:36.732 [2024-12-06 20:47:53.661278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.120 ms 00:20:36.732 [2024-12-06 20:47:53.661284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.732 [2024-12-06 20:47:53.669123] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:36.732 [2024-12-06 20:47:53.681129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.732 [2024-12-06 20:47:53.681161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:36.732 [2024-12-06 20:47:53.681171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.777 ms 00:20:36.732 [2024-12-06 20:47:53.681181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.732 [2024-12-06 20:47:53.681252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.732 [2024-12-06 20:47:53.681260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:36.732 [2024-12-06 20:47:53.681267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:36.732 [2024-12-06 20:47:53.681273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.732 [2024-12-06 20:47:53.681309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.732 [2024-12-06 20:47:53.681316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:36.732 [2024-12-06 20:47:53.681322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:36.732 [2024-12-06 20:47:53.681330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.732 [2024-12-06 20:47:53.681354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.732 [2024-12-06 20:47:53.681361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:36.732 [2024-12-06 20:47:53.681367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:36.732 [2024-12-06 20:47:53.681372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.732 [2024-12-06 20:47:53.681395] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:36.732 [2024-12-06 20:47:53.681402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.732 [2024-12-06 20:47:53.681409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:36.732 [2024-12-06 20:47:53.681414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:36.732 [2024-12-06 20:47:53.681420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.732 [2024-12-06 20:47:53.699128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.732 [2024-12-06 20:47:53.699157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:36.732 [2024-12-06 20:47:53.699165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.694 ms 00:20:36.732 [2024-12-06 20:47:53.699171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.732 [2024-12-06 20:47:53.699239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.732 [2024-12-06 20:47:53.699247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:36.732 [2024-12-06 20:47:53.699254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:36.732 [2024-12-06 20:47:53.699260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.732 [2024-12-06 20:47:53.699883] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:36.732 [2024-12-06 20:47:53.702226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.245 ms, result 0 00:20:36.732 [2024-12-06 20:47:53.702755] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:36.732 [2024-12-06 20:47:53.717621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:37.667  [2024-12-06T20:47:55.733Z] Copying: 20/256 [MB] (20 MBps) [2024-12-06T20:47:57.121Z] Copying: 35/256 [MB] (14 MBps) [2024-12-06T20:47:58.053Z] Copying: 57/256 [MB] (22 MBps) [2024-12-06T20:47:58.985Z] Copying: 75/256 [MB] (18 MBps) [2024-12-06T20:47:59.920Z] Copying: 98/256 [MB] (23 MBps) [2024-12-06T20:48:00.855Z] Copying: 128/256 [MB] (29 MBps) [2024-12-06T20:48:01.790Z] Copying: 153/256 [MB] (25 MBps) [2024-12-06T20:48:02.724Z] Copying: 179/256 [MB] (26 MBps) [2024-12-06T20:48:04.099Z] Copying: 198/256 [MB] (19 MBps) [2024-12-06T20:48:05.030Z] Copying: 219/256 [MB] (20 MBps) [2024-12-06T20:48:05.961Z] Copying: 234/256 [MB] (15 MBps) [2024-12-06T20:48:06.220Z] Copying: 250/256 [MB] (15 MBps) [2024-12-06T20:48:06.220Z] Copying: 256/256 [MB] (average 20 MBps)[2024-12-06 20:48:05.962774] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:49.087 [2024-12-06 20:48:05.972017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.087 [2024-12-06 20:48:05.972050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:49.087 [2024-12-06 20:48:05.972069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:49.087 [2024-12-06 20:48:05.972077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.087 [2024-12-06 20:48:05.972097] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:49.087 [2024-12-06 20:48:05.974682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.087 [2024-12-06 20:48:05.974707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:49.087 [2024-12-06 20:48:05.974717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.572 ms 00:20:49.087 [2024-12-06 20:48:05.974725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.087 [2024-12-06 20:48:05.974984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.087 [2024-12-06 20:48:05.974994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:49.087 [2024-12-06 20:48:05.975002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:20:49.087 [2024-12-06 20:48:05.975010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.087 [2024-12-06 20:48:05.978698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.087 [2024-12-06 20:48:05.978717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:49.087 [2024-12-06 20:48:05.978727] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:20:49.087 [2024-12-06 20:48:05.978735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.087 [2024-12-06 20:48:05.985975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.087 [2024-12-06 20:48:05.985998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:49.087 [2024-12-06 20:48:05.986008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.224 ms 00:20:49.087 [2024-12-06 20:48:05.986016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.087 [2024-12-06 20:48:06.009197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.087 [2024-12-06 20:48:06.009324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:49.087 [2024-12-06 20:48:06.009342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.116 ms 00:20:49.087 [2024-12-06 20:48:06.009349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.087 [2024-12-06 20:48:06.022929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.087 [2024-12-06 20:48:06.022961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:49.087 [2024-12-06 20:48:06.022976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.548 ms 00:20:49.087 [2024-12-06 20:48:06.022983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.087 [2024-12-06 20:48:06.023114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.087 [2024-12-06 20:48:06.023124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:49.087 [2024-12-06 20:48:06.023139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:49.087 [2024-12-06 20:48:06.023146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.087 [2024-12-06 20:48:06.046403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.087 [2024-12-06 20:48:06.046432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:49.087 [2024-12-06 20:48:06.046441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.241 ms 00:20:49.087 [2024-12-06 20:48:06.046449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.087 [2024-12-06 20:48:06.069195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.087 [2024-12-06 20:48:06.069223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:49.087 [2024-12-06 20:48:06.069233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.715 ms 00:20:49.087 [2024-12-06 20:48:06.069239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.087 [2024-12-06 20:48:06.091651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.087 [2024-12-06 20:48:06.091678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:49.087 [2024-12-06 20:48:06.091688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.381 ms 00:20:49.087 [2024-12-06 20:48:06.091695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.087 [2024-12-06 20:48:06.114329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.087 [2024-12-06 20:48:06.114359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:20:49.087 [2024-12-06 20:48:06.114369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.564 ms 00:20:49.087 [2024-12-06 20:48:06.114376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.087 [2024-12-06 20:48:06.114409] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:49.087 [2024-12-06 20:48:06.114423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:49.087 [2024-12-06 20:48:06.114570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 
20:48:06.114584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:20:49.088 [2024-12-06 20:48:06.114766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.114993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:49.088 [2024-12-06 20:48:06.115178] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:49.088 [2024-12-06 20:48:06.115185] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed6b6440-c21e-40a5-a295-d460d8302bed 00:20:49.088 [2024-12-06 20:48:06.115193] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:49.088 [2024-12-06 20:48:06.115200] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:49.088 [2024-12-06 20:48:06.115207] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:49.088 [2024-12-06 20:48:06.115214] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:49.088 [2024-12-06 20:48:06.115220] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:49.088 [2024-12-06 20:48:06.115227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:49.088 [2024-12-06 20:48:06.115236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:49.088 [2024-12-06 20:48:06.115243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:49.088 [2024-12-06 20:48:06.115249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:49.088 [2024-12-06 20:48:06.115255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.089 [2024-12-06 20:48:06.115262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:49.089 [2024-12-06 20:48:06.115270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:20:49.089 [2024-12-06 20:48:06.115278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.089 [2024-12-06 20:48:06.127412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.089 [2024-12-06 20:48:06.127439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:49.089 [2024-12-06 20:48:06.127449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.108 ms 00:20:49.089 [2024-12-06 20:48:06.127456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.089 [2024-12-06 20:48:06.127801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.089 [2024-12-06 20:48:06.127809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:49.089 [2024-12-06 20:48:06.127817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:20:49.089 [2024-12-06 20:48:06.127824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.089 [2024-12-06 20:48:06.162688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.089 [2024-12-06 20:48:06.162724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:49.089 [2024-12-06 20:48:06.162735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.089 [2024-12-06 20:48:06.162747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.089 
[2024-12-06 20:48:06.162815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.089 [2024-12-06 20:48:06.162824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.089 [2024-12-06 20:48:06.162831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.089 [2024-12-06 20:48:06.162839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.089 [2024-12-06 20:48:06.162883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.089 [2024-12-06 20:48:06.162907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.089 [2024-12-06 20:48:06.162915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.089 [2024-12-06 20:48:06.162922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.089 [2024-12-06 20:48:06.162941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.089 [2024-12-06 20:48:06.162949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.089 [2024-12-06 20:48:06.162956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.089 [2024-12-06 20:48:06.162962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.347 [2024-12-06 20:48:06.240002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.347 [2024-12-06 20:48:06.240039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.347 [2024-12-06 20:48:06.240049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.347 [2024-12-06 20:48:06.240056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.347 [2024-12-06 20:48:06.303389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.347 [2024-12-06 20:48:06.303425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.347 [2024-12-06 20:48:06.303436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.347 [2024-12-06 20:48:06.303444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.347 [2024-12-06 20:48:06.303490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.347 [2024-12-06 20:48:06.303499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:49.347 [2024-12-06 20:48:06.303507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.347 [2024-12-06 20:48:06.303514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.347 [2024-12-06 20:48:06.303542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.347 [2024-12-06 20:48:06.303554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:49.347 [2024-12-06 20:48:06.303561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.347 [2024-12-06 20:48:06.303568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.347 [2024-12-06 20:48:06.303652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.347 [2024-12-06 20:48:06.303662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:49.347 [2024-12-06 20:48:06.303669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.347 [2024-12-06 20:48:06.303677] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.347 [2024-12-06 20:48:06.303706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.347 [2024-12-06 20:48:06.303714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:49.347 [2024-12-06 20:48:06.303725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.347 [2024-12-06 20:48:06.303732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.347 [2024-12-06 20:48:06.303768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.347 [2024-12-06 20:48:06.303777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:49.347 [2024-12-06 20:48:06.303784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.347 [2024-12-06 20:48:06.303791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.347 [2024-12-06 20:48:06.303832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.347 [2024-12-06 20:48:06.303844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:49.347 [2024-12-06 20:48:06.303851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.347 [2024-12-06 20:48:06.303858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.347 [2024-12-06 20:48:06.304009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.982 ms, result 0 00:20:49.913 00:20:49.913 00:20:49.913 20:48:06 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:49.913 20:48:07 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:50.482 20:48:07 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:50.740 [2024-12-06 20:48:07.615184] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
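The three ftl.ftl_trim commands above carry the actual check: a trimmed region must read back as zeroes, so trim.sh@86 compares the first 4 MiB (4194304 bytes) of the dumped data byte-for-byte against /dev/zero, trim.sh@87 fingerprints the file with md5sum for a later comparison, and trim.sh@90 then uses spdk_dd to write a fresh random pattern through the ftl0 bdev. A condensed sketch of the same verification pattern (the helper name and the $testdir variable are illustrative, not taken from trim.sh):

    # Verify that a trimmed range reads back as zeroes, then fingerprint it.
    verify_trimmed_range() {
        local dump_file=$1 bytes=$2
        cmp --bytes="$bytes" "$dump_file" /dev/zero || return 1   # any non-zero byte fails
        md5sum "$dump_file"                                       # checksum for later diffing
    }
    verify_trimmed_range "$testdir/data" $((4 * 1024 * 1024))     # 4194304 bytes, as above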
00:20:50.740 [2024-12-06 20:48:07.615300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76842 ] 00:20:50.740 [2024-12-06 20:48:07.774643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.740 [2024-12-06 20:48:07.868699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.000 [2024-12-06 20:48:08.125320] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:51.000 [2024-12-06 20:48:08.125385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:51.262 [2024-12-06 20:48:08.283658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.262 [2024-12-06 20:48:08.283702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:51.262 [2024-12-06 20:48:08.283714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:51.262 [2024-12-06 20:48:08.283722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.262 [2024-12-06 20:48:08.286395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.262 [2024-12-06 20:48:08.286428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:51.262 [2024-12-06 20:48:08.286438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.657 ms 00:20:51.262 [2024-12-06 20:48:08.286445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.262 [2024-12-06 20:48:08.286513] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:51.262 [2024-12-06 20:48:08.287531] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:51.262 [2024-12-06 20:48:08.287605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.262 [2024-12-06 20:48:08.287616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:51.262 [2024-12-06 20:48:08.287625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:20:51.262 [2024-12-06 20:48:08.287633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.262 [2024-12-06 20:48:08.288836] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:51.262 [2024-12-06 20:48:08.301770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.262 [2024-12-06 20:48:08.301810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:51.262 [2024-12-06 20:48:08.301821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.935 ms 00:20:51.262 [2024-12-06 20:48:08.301828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.262 [2024-12-06 20:48:08.301926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.262 [2024-12-06 20:48:08.301938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:51.262 [2024-12-06 20:48:08.301947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:51.262 [2024-12-06 20:48:08.301954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.262 [2024-12-06 20:48:08.306750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:51.262 [2024-12-06 20:48:08.306779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:51.262 [2024-12-06 20:48:08.306789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.755 ms 00:20:51.262 [2024-12-06 20:48:08.306797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.262 [2024-12-06 20:48:08.306884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.262 [2024-12-06 20:48:08.306922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:51.262 [2024-12-06 20:48:08.306931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:51.262 [2024-12-06 20:48:08.306938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.262 [2024-12-06 20:48:08.306965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.262 [2024-12-06 20:48:08.306973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:51.262 [2024-12-06 20:48:08.306981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:51.262 [2024-12-06 20:48:08.306988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.262 [2024-12-06 20:48:08.307008] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:51.262 [2024-12-06 20:48:08.310269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.262 [2024-12-06 20:48:08.310295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:51.262 [2024-12-06 20:48:08.310304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:20:51.262 [2024-12-06 20:48:08.310311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.262 [2024-12-06 20:48:08.310345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.262 [2024-12-06 20:48:08.310354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:51.262 [2024-12-06 20:48:08.310362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:51.262 [2024-12-06 20:48:08.310369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.262 [2024-12-06 20:48:08.310387] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:51.262 [2024-12-06 20:48:08.310405] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:51.262 [2024-12-06 20:48:08.310439] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:51.262 [2024-12-06 20:48:08.310453] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:51.263 [2024-12-06 20:48:08.310554] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:51.263 [2024-12-06 20:48:08.310564] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:51.263 [2024-12-06 20:48:08.310574] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:51.263 [2024-12-06 20:48:08.310586] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:51.263 [2024-12-06 20:48:08.310594] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:51.263 [2024-12-06 20:48:08.310602] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:51.263 [2024-12-06 20:48:08.310609] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:51.263 [2024-12-06 20:48:08.310616] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:51.263 [2024-12-06 20:48:08.310623] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:51.263 [2024-12-06 20:48:08.310630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.263 [2024-12-06 20:48:08.310637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:51.263 [2024-12-06 20:48:08.310644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:20:51.263 [2024-12-06 20:48:08.310651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.263 [2024-12-06 20:48:08.310737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.263 [2024-12-06 20:48:08.310747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:51.263 [2024-12-06 20:48:08.310754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:51.263 [2024-12-06 20:48:08.310761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.263 [2024-12-06 20:48:08.310862] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:51.263 [2024-12-06 20:48:08.310872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:51.263 [2024-12-06 20:48:08.310880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:51.263 [2024-12-06 20:48:08.310904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.263 [2024-12-06 20:48:08.310912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:51.263 [2024-12-06 20:48:08.310919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:51.263 [2024-12-06 20:48:08.310925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:51.263 [2024-12-06 20:48:08.310932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:51.263 [2024-12-06 20:48:08.310940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:51.263 [2024-12-06 20:48:08.310946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:51.263 [2024-12-06 20:48:08.310953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:51.263 [2024-12-06 20:48:08.310966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:51.263 [2024-12-06 20:48:08.310973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:51.263 [2024-12-06 20:48:08.310980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:51.263 [2024-12-06 20:48:08.310987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:51.263 [2024-12-06 20:48:08.310993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.263 [2024-12-06 20:48:08.311000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:51.263 [2024-12-06 20:48:08.311006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:51.263 [2024-12-06 20:48:08.311013] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.263 [2024-12-06 20:48:08.311020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:51.263 [2024-12-06 20:48:08.311026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:51.263 [2024-12-06 20:48:08.311032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.263 [2024-12-06 20:48:08.311039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:51.263 [2024-12-06 20:48:08.311045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:51.263 [2024-12-06 20:48:08.311052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.263 [2024-12-06 20:48:08.311058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:51.263 [2024-12-06 20:48:08.311065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:51.263 [2024-12-06 20:48:08.311071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.263 [2024-12-06 20:48:08.311077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:51.263 [2024-12-06 20:48:08.311083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:51.263 [2024-12-06 20:48:08.311089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.263 [2024-12-06 20:48:08.311095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:51.263 [2024-12-06 20:48:08.311102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:51.263 [2024-12-06 20:48:08.311108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:51.263 [2024-12-06 20:48:08.311114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:51.263 [2024-12-06 20:48:08.311120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:51.263 [2024-12-06 20:48:08.311126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:51.263 [2024-12-06 20:48:08.311133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:51.263 [2024-12-06 20:48:08.311139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:51.263 [2024-12-06 20:48:08.311145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.263 [2024-12-06 20:48:08.311152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:51.263 [2024-12-06 20:48:08.311158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:51.263 [2024-12-06 20:48:08.311165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.263 [2024-12-06 20:48:08.311172] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:51.263 [2024-12-06 20:48:08.311179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:51.263 [2024-12-06 20:48:08.311188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:51.263 [2024-12-06 20:48:08.311196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.263 [2024-12-06 20:48:08.311204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:51.263 [2024-12-06 20:48:08.311211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:51.263 [2024-12-06 20:48:08.311218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:51.263 
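The layout dump is internally consistent and easy to sanity-check: the L2P table needs one address per entry, so with 23592960 entries at 4 bytes each the l2p region must span 23592960 * 4 = 94371840 bytes = 90.00 MiB, exactly the size printed for "Region l2p" in the NV cache layout above. A sketch of that arithmetic, using only numbers taken from this dump:

    # l2p region size = L2P entries * L2P address size
    entries=23592960   # "L2P entries" from ftl_layout_setup
    addr_size=4        # "L2P address size" in bytes
    echo "l2p region: $(( entries * addr_size )) B = $(( entries * addr_size / 1024 / 1024 )) MiB"
    # -> l2p region: 94371840 B = 90 MiB, matching "Region l2p ... blocks: 90.00 MiB"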
[2024-12-06 20:48:08.311224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:51.263 [2024-12-06 20:48:08.311230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:51.263 [2024-12-06 20:48:08.311237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:51.263 [2024-12-06 20:48:08.311245] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:51.263 [2024-12-06 20:48:08.311253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:51.263 [2024-12-06 20:48:08.311261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:51.263 [2024-12-06 20:48:08.311268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:51.263 [2024-12-06 20:48:08.311275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:51.263 [2024-12-06 20:48:08.311282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:51.263 [2024-12-06 20:48:08.311288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:51.263 [2024-12-06 20:48:08.311295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:51.263 [2024-12-06 20:48:08.311301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:51.263 [2024-12-06 20:48:08.311308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:51.263 [2024-12-06 20:48:08.311315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:51.263 [2024-12-06 20:48:08.311322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:51.263 [2024-12-06 20:48:08.311329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:51.263 [2024-12-06 20:48:08.311336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:51.263 [2024-12-06 20:48:08.311342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:51.263 [2024-12-06 20:48:08.311349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:51.263 [2024-12-06 20:48:08.311356] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:51.263 [2024-12-06 20:48:08.311364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:51.263 [2024-12-06 20:48:08.311372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:51.263 [2024-12-06 20:48:08.311378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:51.263 [2024-12-06 20:48:08.311385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:51.263 [2024-12-06 20:48:08.311392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:51.263 [2024-12-06 20:48:08.311400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.263 [2024-12-06 20:48:08.311409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:51.263 [2024-12-06 20:48:08.311417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:20:51.263 [2024-12-06 20:48:08.311423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.263 [2024-12-06 20:48:08.337304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.263 [2024-12-06 20:48:08.337446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:51.264 [2024-12-06 20:48:08.337461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.819 ms 00:20:51.264 [2024-12-06 20:48:08.337469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.264 [2024-12-06 20:48:08.337589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.264 [2024-12-06 20:48:08.337599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:51.264 [2024-12-06 20:48:08.337607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:51.264 [2024-12-06 20:48:08.337614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.264 [2024-12-06 20:48:08.376549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.264 [2024-12-06 20:48:08.376585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:51.264 [2024-12-06 20:48:08.376599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.914 ms 00:20:51.264 [2024-12-06 20:48:08.376607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.264 [2024-12-06 20:48:08.376692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.264 [2024-12-06 20:48:08.376703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:51.264 [2024-12-06 20:48:08.376712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:51.264 [2024-12-06 20:48:08.376719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.264 [2024-12-06 20:48:08.377060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.264 [2024-12-06 20:48:08.377075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:51.264 [2024-12-06 20:48:08.377089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:20:51.264 [2024-12-06 20:48:08.377096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.264 [2024-12-06 20:48:08.377219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.264 [2024-12-06 20:48:08.377228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:51.264 [2024-12-06 20:48:08.377236] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:51.264 [2024-12-06 20:48:08.377243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.264 [2024-12-06 20:48:08.390635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.264 [2024-12-06 20:48:08.390667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:51.264 [2024-12-06 20:48:08.390676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.373 ms 00:20:51.264 [2024-12-06 20:48:08.390683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.524 [2024-12-06 20:48:08.403513] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:51.524 [2024-12-06 20:48:08.403545] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:51.524 [2024-12-06 20:48:08.403556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.524 [2024-12-06 20:48:08.403564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:51.524 [2024-12-06 20:48:08.403572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.782 ms 00:20:51.524 [2024-12-06 20:48:08.403578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.524 [2024-12-06 20:48:08.427699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.524 [2024-12-06 20:48:08.427730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:51.524 [2024-12-06 20:48:08.427741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.054 ms 00:20:51.524 [2024-12-06 20:48:08.427749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.524 [2024-12-06 20:48:08.439259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.524 [2024-12-06 20:48:08.439286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:51.524 [2024-12-06 20:48:08.439296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.445 ms 00:20:51.524 [2024-12-06 20:48:08.439302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.524 [2024-12-06 20:48:08.450745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.524 [2024-12-06 20:48:08.450870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:51.524 [2024-12-06 20:48:08.450885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.382 ms 00:20:51.524 [2024-12-06 20:48:08.450907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.524 [2024-12-06 20:48:08.451507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.524 [2024-12-06 20:48:08.451525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:51.524 [2024-12-06 20:48:08.451534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:20:51.524 [2024-12-06 20:48:08.451541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.524 [2024-12-06 20:48:08.506731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.524 [2024-12-06 20:48:08.506776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:51.524 [2024-12-06 20:48:08.506788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.167 ms 00:20:51.524 [2024-12-06 20:48:08.506797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.524 [2024-12-06 20:48:08.517064] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:51.524 [2024-12-06 20:48:08.530882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.524 [2024-12-06 20:48:08.530924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:51.524 [2024-12-06 20:48:08.530935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.994 ms 00:20:51.524 [2024-12-06 20:48:08.530947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.524 [2024-12-06 20:48:08.531026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.524 [2024-12-06 20:48:08.531036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:51.524 [2024-12-06 20:48:08.531045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:51.524 [2024-12-06 20:48:08.531052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.524 [2024-12-06 20:48:08.531094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.524 [2024-12-06 20:48:08.531103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:51.524 [2024-12-06 20:48:08.531111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:51.524 [2024-12-06 20:48:08.531121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.524 [2024-12-06 20:48:08.531151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.524 [2024-12-06 20:48:08.531160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:51.524 [2024-12-06 20:48:08.531168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:51.525 [2024-12-06 20:48:08.531175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.525 [2024-12-06 20:48:08.531205] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:51.525 [2024-12-06 20:48:08.531215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.525 [2024-12-06 20:48:08.531222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:51.525 [2024-12-06 20:48:08.531230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:51.525 [2024-12-06 20:48:08.531238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.525 [2024-12-06 20:48:08.555082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.525 [2024-12-06 20:48:08.555114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:51.525 [2024-12-06 20:48:08.555126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.822 ms 00:20:51.525 [2024-12-06 20:48:08.555134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.525 [2024-12-06 20:48:08.555215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.525 [2024-12-06 20:48:08.555225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:51.525 [2024-12-06 20:48:08.555234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:51.525 [2024-12-06 20:48:08.555241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
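Every management step in this startup is logged as the same four-record group: an "Action" (or "Rollback") marker followed by "name:", "duration:" and "status:" records. That regularity makes the log easy to post-process; a sketch that turns the trace_step records above into a per-step timing table, assuming GNU awk and the log saved to $log:

    # Pair each "name:" record with the "duration:" record that follows it.
    grep 'trace_step' "$log" | awk '
        /name:/     { sub(/.*name: /, "");     name = $0 }
        /duration:/ { sub(/.*duration: /, ""); print name ": " $0 }'
    # e.g. "Load super block: 12.935 ms", "Restore P2L checkpoints: 55.167 ms"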
00:20:51.525 [2024-12-06 20:48:08.555983] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:51.525 [2024-12-06 20:48:08.558946] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.031 ms, result 0 00:20:51.525 [2024-12-06 20:48:08.559933] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:51.525 [2024-12-06 20:48:08.572686] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:52.095  [2024-12-06T20:48:09.228Z] Copying: 4096/4096 [kB] (average 10 MBps)
[2024-12-06 20:48:08.945849] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:52.095 [2024-12-06 20:48:08.954418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.095 [2024-12-06 20:48:08.954450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:52.095 [2024-12-06 20:48:08.954467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:52.095 [2024-12-06 20:48:08.954475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.095 [2024-12-06 20:48:08.954494] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:52.095 [2024-12-06 20:48:08.957060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.095 [2024-12-06 20:48:08.957184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:52.095 [2024-12-06 20:48:08.957199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.554 ms 00:20:52.095 [2024-12-06 20:48:08.957207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.095 [2024-12-06 20:48:08.959572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.095 [2024-12-06 20:48:08.959600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:52.095 [2024-12-06 20:48:08.959610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.342 ms 00:20:52.095 [2024-12-06 20:48:08.959617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.095 [2024-12-06 20:48:08.964031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.095 [2024-12-06 20:48:08.964131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:52.095 [2024-12-06 20:48:08.964144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.396 ms 00:20:52.095 [2024-12-06 20:48:08.964152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.095 [2024-12-06 20:48:08.971035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.095 [2024-12-06 20:48:08.971133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:52.095 [2024-12-06 20:48:08.971148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.856 ms 00:20:52.095 [2024-12-06 20:48:08.971156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.095 [2024-12-06 20:48:08.994635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.095 [2024-12-06 20:48:08.994745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:52.095 [2024-12-06 20:48:08.994759] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 23.425 ms 00:20:52.095 [2024-12-06 20:48:08.994766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.095 [2024-12-06 20:48:09.008469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.095 [2024-12-06 20:48:09.008503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:52.095 [2024-12-06 20:48:09.008514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.675 ms 00:20:52.095 [2024-12-06 20:48:09.008521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.095 [2024-12-06 20:48:09.008668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.095 [2024-12-06 20:48:09.008679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:52.095 [2024-12-06 20:48:09.008693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:52.095 [2024-12-06 20:48:09.008700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.095 [2024-12-06 20:48:09.032045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.095 [2024-12-06 20:48:09.032160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:52.095 [2024-12-06 20:48:09.032175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.330 ms 00:20:52.095 [2024-12-06 20:48:09.032182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.095 [2024-12-06 20:48:09.055305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.095 [2024-12-06 20:48:09.055413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:52.095 [2024-12-06 20:48:09.055426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.081 ms 00:20:52.095 [2024-12-06 20:48:09.055433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.095 [2024-12-06 20:48:09.077996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.095 [2024-12-06 20:48:09.078097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:52.095 [2024-12-06 20:48:09.078110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.534 ms 00:20:52.095 [2024-12-06 20:48:09.078117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.095 [2024-12-06 20:48:09.100442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.095 [2024-12-06 20:48:09.100471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:52.095 [2024-12-06 20:48:09.100481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.274 ms 00:20:52.095 [2024-12-06 20:48:09.100487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.095 [2024-12-06 20:48:09.100518] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:52.095 [2024-12-06 20:48:09.100531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:52.095 [2024-12-06 20:48:09.100565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:52.095 [2024-12-06 20:48:09.100796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.100997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101140] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:52.096 [2024-12-06 20:48:09.101316] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:52.096 [2024-12-06 20:48:09.101323] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed6b6440-c21e-40a5-a295-d460d8302bed 00:20:52.096 [2024-12-06 20:48:09.101330] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:52.096 [2024-12-06 20:48:09.101337] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:52.096 [2024-12-06 20:48:09.101344] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:52.096 [2024-12-06 20:48:09.101351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:52.096 [2024-12-06 20:48:09.101358] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:52.096 [2024-12-06 20:48:09.101365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:52.096 [2024-12-06 20:48:09.101374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:52.096 [2024-12-06 20:48:09.101380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:52.096 [2024-12-06 20:48:09.101386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:52.096 [2024-12-06 20:48:09.101392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.096 [2024-12-06 20:48:09.101399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:52.096 [2024-12-06 20:48:09.101407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:20:52.096 [2024-12-06 20:48:09.101413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.096 [2024-12-06 20:48:09.113461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.096 [2024-12-06 20:48:09.113489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:52.096 [2024-12-06 20:48:09.113499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.031 ms 00:20:52.096 [2024-12-06 20:48:09.113507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.096 [2024-12-06 20:48:09.113865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.096 [2024-12-06 20:48:09.113879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:52.096 [2024-12-06 20:48:09.113905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:20:52.096 [2024-12-06 20:48:09.113912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.096 [2024-12-06 20:48:09.148740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.096 [2024-12-06 20:48:09.148850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.096 [2024-12-06 20:48:09.148864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.096 [2024-12-06 20:48:09.148877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.096 [2024-12-06 20:48:09.148954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.096 [2024-12-06 20:48:09.148963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.096 [2024-12-06 20:48:09.148970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.096 [2024-12-06 20:48:09.148978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.096 [2024-12-06 20:48:09.149020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.097 [2024-12-06 20:48:09.149029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.097 [2024-12-06 20:48:09.149037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.097 [2024-12-06 20:48:09.149044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.097 [2024-12-06 20:48:09.149063] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.097 [2024-12-06 20:48:09.149071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.097 [2024-12-06 20:48:09.149078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.097 [2024-12-06 20:48:09.149084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.356 [2024-12-06 20:48:09.226205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.356 [2024-12-06 20:48:09.226243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.356 [2024-12-06 20:48:09.226253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.356 [2024-12-06 20:48:09.226265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.356 [2024-12-06 20:48:09.289480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.356 [2024-12-06 20:48:09.289520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.356 [2024-12-06 20:48:09.289531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.356 [2024-12-06 20:48:09.289539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.356 [2024-12-06 20:48:09.289584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.356 [2024-12-06 20:48:09.289592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.356 [2024-12-06 20:48:09.289600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.356 [2024-12-06 20:48:09.289608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.356 [2024-12-06 20:48:09.289635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.356 [2024-12-06 20:48:09.289648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.356 [2024-12-06 20:48:09.289655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.356 [2024-12-06 20:48:09.289662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.356 [2024-12-06 20:48:09.289745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.356 [2024-12-06 20:48:09.289754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.356 [2024-12-06 20:48:09.289762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.356 [2024-12-06 20:48:09.289769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.356 [2024-12-06 20:48:09.289801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.356 [2024-12-06 20:48:09.289809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:52.356 [2024-12-06 20:48:09.289820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.356 [2024-12-06 20:48:09.289827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.356 [2024-12-06 20:48:09.289861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.356 [2024-12-06 20:48:09.289869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.356 [2024-12-06 20:48:09.289876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.356 [2024-12-06 20:48:09.289883] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:20:52.356 [2024-12-06 20:48:09.289947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:52.356 [2024-12-06 20:48:09.289959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:52.356 [2024-12-06 20:48:09.289967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:52.356 [2024-12-06 20:48:09.289974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.356 [2024-12-06 20:48:09.290109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.680 ms, result 0
00:20:52.923
00:20:52.923
00:20:52.923 20:48:09 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:20:52.923 20:48:09 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76867
00:20:52.923 20:48:09 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76867
00:20:52.923 20:48:09 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76867 ']'
00:20:52.923 20:48:09 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:52.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
20:48:09 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:52.923 20:48:09 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:52.923 20:48:09 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:52.923 20:48:09 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:53.181 [2024-12-06 20:48:10.100709] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization...
00:20:53.181 [2024-12-06 20:48:10.101336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76867 ] 00:20:53.181 [2024-12-06 20:48:10.278600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.440 [2024-12-06 20:48:10.375153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.008 20:48:10 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.008 20:48:10 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:54.008 20:48:10 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:54.267 [2024-12-06 20:48:11.173908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.267 [2024-12-06 20:48:11.174092] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.267 [2024-12-06 20:48:11.348100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.267 [2024-12-06 20:48:11.348254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:54.267 [2024-12-06 20:48:11.348326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:54.267 [2024-12-06 20:48:11.348351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.267 [2024-12-06 20:48:11.351016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.267 [2024-12-06 20:48:11.351125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.267 [2024-12-06 20:48:11.351180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.631 ms 00:20:54.267 [2024-12-06 20:48:11.351203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.267 [2024-12-06 20:48:11.351344] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:54.267 [2024-12-06 20:48:11.352111] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:54.267 [2024-12-06 20:48:11.352217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.267 [2024-12-06 20:48:11.352272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.267 [2024-12-06 20:48:11.352298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:20:54.267 [2024-12-06 20:48:11.352317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.267 [2024-12-06 20:48:11.354305] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:54.267 [2024-12-06 20:48:11.367032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.267 [2024-12-06 20:48:11.367159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:54.267 [2024-12-06 20:48:11.367214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.732 ms 00:20:54.267 [2024-12-06 20:48:11.367239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.267 [2024-12-06 20:48:11.367328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.267 [2024-12-06 20:48:11.367359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:54.267 [2024-12-06 20:48:11.367380] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:54.267 [2024-12-06 20:48:11.367400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.267 [2024-12-06 20:48:11.372298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.267 [2024-12-06 20:48:11.372403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.267 [2024-12-06 20:48:11.372449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:20:54.267 [2024-12-06 20:48:11.372472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.267 [2024-12-06 20:48:11.372577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.267 [2024-12-06 20:48:11.372604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.267 [2024-12-06 20:48:11.372625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:54.267 [2024-12-06 20:48:11.372648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.268 [2024-12-06 20:48:11.372730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.268 [2024-12-06 20:48:11.372757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:54.268 [2024-12-06 20:48:11.372777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:54.268 [2024-12-06 20:48:11.372797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.268 [2024-12-06 20:48:11.372830] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:54.268 [2024-12-06 20:48:11.375990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.268 [2024-12-06 20:48:11.376078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.268 [2024-12-06 20:48:11.376126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.163 ms 00:20:54.268 [2024-12-06 20:48:11.376148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.268 [2024-12-06 20:48:11.376197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.268 [2024-12-06 20:48:11.376236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:54.268 [2024-12-06 20:48:11.376258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:54.268 [2024-12-06 20:48:11.376278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.268 [2024-12-06 20:48:11.376310] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:54.268 [2024-12-06 20:48:11.376341] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:54.268 [2024-12-06 20:48:11.376481] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:54.268 [2024-12-06 20:48:11.376519] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:54.268 [2024-12-06 20:48:11.376644] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:54.268 [2024-12-06 20:48:11.376676] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:54.268 [2024-12-06 20:48:11.376750] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:54.268 [2024-12-06 20:48:11.376783] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:54.268 [2024-12-06 20:48:11.376815] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:54.268 [2024-12-06 20:48:11.377211] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:54.268 [2024-12-06 20:48:11.377221] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:54.268 [2024-12-06 20:48:11.377229] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:54.268 [2024-12-06 20:48:11.377240] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:54.268 [2024-12-06 20:48:11.377248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.268 [2024-12-06 20:48:11.377256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:54.268 [2024-12-06 20:48:11.377265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:20:54.268 [2024-12-06 20:48:11.377273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.268 [2024-12-06 20:48:11.377388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.268 [2024-12-06 20:48:11.377398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:54.268 [2024-12-06 20:48:11.377406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:54.268 [2024-12-06 20:48:11.377415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.268 [2024-12-06 20:48:11.377514] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:54.268 [2024-12-06 20:48:11.377525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:54.268 [2024-12-06 20:48:11.377534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.268 [2024-12-06 20:48:11.377543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.268 [2024-12-06 20:48:11.377550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:54.268 [2024-12-06 20:48:11.377560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:54.268 [2024-12-06 20:48:11.377567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:54.268 [2024-12-06 20:48:11.377577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:54.268 [2024-12-06 20:48:11.377584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:54.268 [2024-12-06 20:48:11.377593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.268 [2024-12-06 20:48:11.377599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:54.268 [2024-12-06 20:48:11.377607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:54.268 [2024-12-06 20:48:11.377614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.268 [2024-12-06 20:48:11.377622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:54.268 [2024-12-06 20:48:11.377628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:54.268 [2024-12-06 20:48:11.377636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.268 
[2024-12-06 20:48:11.377643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:54.268 [2024-12-06 20:48:11.377651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:54.268 [2024-12-06 20:48:11.377663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.268 [2024-12-06 20:48:11.377671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:54.268 [2024-12-06 20:48:11.377677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:54.268 [2024-12-06 20:48:11.377685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.268 [2024-12-06 20:48:11.377691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:54.268 [2024-12-06 20:48:11.377701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:54.268 [2024-12-06 20:48:11.377707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.268 [2024-12-06 20:48:11.377717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:54.268 [2024-12-06 20:48:11.377724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:54.268 [2024-12-06 20:48:11.377732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.268 [2024-12-06 20:48:11.377739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:54.268 [2024-12-06 20:48:11.377748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:54.268 [2024-12-06 20:48:11.377754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.268 [2024-12-06 20:48:11.377763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:54.268 [2024-12-06 20:48:11.377769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:54.268 [2024-12-06 20:48:11.377777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.268 [2024-12-06 20:48:11.377783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:54.268 [2024-12-06 20:48:11.377791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:54.268 [2024-12-06 20:48:11.377797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.268 [2024-12-06 20:48:11.377805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:54.268 [2024-12-06 20:48:11.377811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:54.268 [2024-12-06 20:48:11.377821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.268 [2024-12-06 20:48:11.377827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:54.268 [2024-12-06 20:48:11.377835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:54.268 [2024-12-06 20:48:11.377841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.268 [2024-12-06 20:48:11.377849] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:54.268 [2024-12-06 20:48:11.377858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:54.268 [2024-12-06 20:48:11.377867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.268 [2024-12-06 20:48:11.377873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.268 [2024-12-06 20:48:11.377883] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:54.268 [2024-12-06 20:48:11.377900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:54.268 [2024-12-06 20:48:11.377909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:54.268 [2024-12-06 20:48:11.377915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:54.268 [2024-12-06 20:48:11.377923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:54.268 [2024-12-06 20:48:11.377930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:54.268 [2024-12-06 20:48:11.377940] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:54.268 [2024-12-06 20:48:11.377948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.268 [2024-12-06 20:48:11.377965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:54.268 [2024-12-06 20:48:11.377973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:54.268 [2024-12-06 20:48:11.377981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:54.268 [2024-12-06 20:48:11.377989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:54.268 [2024-12-06 20:48:11.377998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:54.268 [2024-12-06 20:48:11.378006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:54.268 [2024-12-06 20:48:11.378014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:54.268 [2024-12-06 20:48:11.378021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:54.268 [2024-12-06 20:48:11.378030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:54.268 [2024-12-06 20:48:11.378037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:54.268 [2024-12-06 20:48:11.378045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:54.269 [2024-12-06 20:48:11.378052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:54.269 [2024-12-06 20:48:11.378061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:54.269 [2024-12-06 20:48:11.378068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:54.269 [2024-12-06 20:48:11.378076] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:54.269 [2024-12-06 
20:48:11.378084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.269 [2024-12-06 20:48:11.378095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:54.269 [2024-12-06 20:48:11.378102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:54.269 [2024-12-06 20:48:11.378110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:54.269 [2024-12-06 20:48:11.378117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:54.269 [2024-12-06 20:48:11.378127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.269 [2024-12-06 20:48:11.378134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:54.269 [2024-12-06 20:48:11.378143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:20:54.269 [2024-12-06 20:48:11.378151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.403956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.404063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.529 [2024-12-06 20:48:11.404112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.749 ms 00:20:54.529 [2024-12-06 20:48:11.404136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.404267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.404293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:54.529 [2024-12-06 20:48:11.404315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:54.529 [2024-12-06 20:48:11.404333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.434565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.434673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.529 [2024-12-06 20:48:11.434721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.198 ms 00:20:54.529 [2024-12-06 20:48:11.434742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.434808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.434831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.529 [2024-12-06 20:48:11.434852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:54.529 [2024-12-06 20:48:11.434871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.435210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.435253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.529 [2024-12-06 20:48:11.435279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:20:54.529 [2024-12-06 20:48:11.435298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.435430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.435495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.529 [2024-12-06 20:48:11.435520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:54.529 [2024-12-06 20:48:11.435539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.449776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.449881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.529 [2024-12-06 20:48:11.449943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.203 ms 00:20:54.529 [2024-12-06 20:48:11.449966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.476422] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:54.529 [2024-12-06 20:48:11.476562] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:54.529 [2024-12-06 20:48:11.476627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.476649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:54.529 [2024-12-06 20:48:11.476671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.538 ms 00:20:54.529 [2024-12-06 20:48:11.476695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.505192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.505319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:54.529 [2024-12-06 20:48:11.505380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.140 ms 00:20:54.529 [2024-12-06 20:48:11.505405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.517348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.517480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:54.529 [2024-12-06 20:48:11.517540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.615 ms 00:20:54.529 [2024-12-06 20:48:11.517564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.529436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.529561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:54.529 [2024-12-06 20:48:11.529619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.523 ms 00:20:54.529 [2024-12-06 20:48:11.529643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.530321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.530410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:54.529 [2024-12-06 20:48:11.530427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:20:54.529 [2024-12-06 20:48:11.530434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 
20:48:11.585439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.585487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:54.529 [2024-12-06 20:48:11.585501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.977 ms 00:20:54.529 [2024-12-06 20:48:11.585508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.595733] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:54.529 [2024-12-06 20:48:11.609151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.609190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:54.529 [2024-12-06 20:48:11.609203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.559 ms 00:20:54.529 [2024-12-06 20:48:11.609213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.609280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.609292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:54.529 [2024-12-06 20:48:11.609301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:54.529 [2024-12-06 20:48:11.609310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.609356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.609367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:54.529 [2024-12-06 20:48:11.609374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:54.529 [2024-12-06 20:48:11.609386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.609408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.609418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:54.529 [2024-12-06 20:48:11.609425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:54.529 [2024-12-06 20:48:11.609436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.609467] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:54.529 [2024-12-06 20:48:11.609479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.609489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:54.529 [2024-12-06 20:48:11.609498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:54.529 [2024-12-06 20:48:11.609505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.632742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.632774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:54.529 [2024-12-06 20:48:11.632787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.212 ms 00:20:54.529 [2024-12-06 20:48:11.632795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.529 [2024-12-06 20:48:11.632881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.529 [2024-12-06 20:48:11.632915] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:54.529 [2024-12-06 20:48:11.632926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:20:54.529 [2024-12-06 20:48:11.632936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:54.529 [2024-12-06 20:48:11.633674] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:54.529 [2024-12-06 20:48:11.636646] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 285.300 ms, result 0
00:20:54.529 [2024-12-06 20:48:11.638500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:54.788 Some configs were skipped because the RPC state that can call them passed over.
00:20:54.788 20:48:11 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:54.788 [2024-12-06 20:48:11.869022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.788 [2024-12-06 20:48:11.869155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:54.788 [2024-12-06 20:48:11.869210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms
00:20:54.788 [2024-12-06 20:48:11.869235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:54.788 [2024-12-06 20:48:11.869284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.855 ms, result 0
00:20:54.788 true
00:20:54.788 20:48:11 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:55.047 [2024-12-06 20:48:12.077726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:55.047 [2024-12-06 20:48:12.077898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:55.047 [2024-12-06 20:48:12.077953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.027 ms
00:20:55.047 [2024-12-06 20:48:12.077975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:55.047 [2024-12-06 20:48:12.078027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.330 ms, result 0
00:20:55.047 true
00:20:55.047 20:48:12 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76867
00:20:55.047 20:48:12 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76867 ']'
00:20:55.047 20:48:12 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76867
00:20:55.047 20:48:12 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:20:55.047 20:48:12 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:55.047 20:48:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76867
00:20:55.047 killing process with pid 76867
00:20:55.047 20:48:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:55.047 20:48:12 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:55.047 20:48:12 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76867'
00:20:55.047 20:48:12 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76867
00:20:55.047 20:48:12 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76867
00:20:55.985 [2024-12-06 20:48:12.816242]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.985 [2024-12-06 20:48:12.816297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:55.985 [2024-12-06 20:48:12.816310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:55.985 [2024-12-06 20:48:12.816319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.985 [2024-12-06 20:48:12.816342] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:55.985 [2024-12-06 20:48:12.818932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.985 [2024-12-06 20:48:12.818964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:55.985 [2024-12-06 20:48:12.818978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.573 ms 00:20:55.985 [2024-12-06 20:48:12.818986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.985 [2024-12-06 20:48:12.819271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.985 [2024-12-06 20:48:12.819281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:55.985 [2024-12-06 20:48:12.819290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:20:55.985 [2024-12-06 20:48:12.819298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.985 [2024-12-06 20:48:12.823818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.985 [2024-12-06 20:48:12.823847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:55.985 [2024-12-06 20:48:12.823860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.499 ms 00:20:55.985 [2024-12-06 20:48:12.823868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.985 [2024-12-06 20:48:12.830764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.985 [2024-12-06 20:48:12.830883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:55.985 [2024-12-06 20:48:12.830913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.852 ms 00:20:55.985 [2024-12-06 20:48:12.830921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.985 [2024-12-06 20:48:12.841090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.985 [2024-12-06 20:48:12.841124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:55.985 [2024-12-06 20:48:12.841137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.114 ms 00:20:55.985 [2024-12-06 20:48:12.841145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.985 [2024-12-06 20:48:12.848876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.985 [2024-12-06 20:48:12.848915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:55.985 [2024-12-06 20:48:12.848927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.694 ms 00:20:55.985 [2024-12-06 20:48:12.848936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.985 [2024-12-06 20:48:12.849078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.985 [2024-12-06 20:48:12.849088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:55.985 [2024-12-06 20:48:12.849098] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:55.985 [2024-12-06 20:48:12.849105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.985 [2024-12-06 20:48:12.859510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.985 [2024-12-06 20:48:12.859538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:55.985 [2024-12-06 20:48:12.859549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.383 ms 00:20:55.985 [2024-12-06 20:48:12.859556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.985 [2024-12-06 20:48:12.869532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.985 [2024-12-06 20:48:12.869567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:55.985 [2024-12-06 20:48:12.869582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.941 ms 00:20:55.985 [2024-12-06 20:48:12.869589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.985 [2024-12-06 20:48:12.878923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.985 [2024-12-06 20:48:12.878951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:55.985 [2024-12-06 20:48:12.878962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.299 ms 00:20:55.985 [2024-12-06 20:48:12.878969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.985 [2024-12-06 20:48:12.888611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.985 [2024-12-06 20:48:12.888637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:55.985 [2024-12-06 20:48:12.888649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.582 ms 00:20:55.985 [2024-12-06 20:48:12.888655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.985 [2024-12-06 20:48:12.888700] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:55.985 [2024-12-06 20:48:12.888714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:55.985 [2024-12-06 20:48:12.888725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:55.985 [2024-12-06 20:48:12.888732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:55.985 [2024-12-06 20:48:12.888741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:55.985 [2024-12-06 20:48:12.888749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:55.985 [2024-12-06 20:48:12.888760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:55.985 [2024-12-06 20:48:12.888767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:55.985 [2024-12-06 20:48:12.888776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888801] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.888993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 
[2024-12-06 20:48:12.889046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:55.986 [2024-12-06 20:48:12.889256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:55.986 [2024-12-06 20:48:12.889557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:55.987 [2024-12-06 20:48:12.889565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:55.987 [2024-12-06 20:48:12.889573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:55.987 [2024-12-06 20:48:12.889593] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:55.987 [2024-12-06 20:48:12.889606] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed6b6440-c21e-40a5-a295-d460d8302bed 00:20:55.987 [2024-12-06 20:48:12.889616] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:55.987 [2024-12-06 20:48:12.889625] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:55.987 [2024-12-06 20:48:12.889631] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:55.987 [2024-12-06 20:48:12.889641] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:55.987 [2024-12-06 20:48:12.889648] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:55.987 [2024-12-06 20:48:12.889657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:55.987 [2024-12-06 20:48:12.889664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:55.987 [2024-12-06 20:48:12.889672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:55.987 [2024-12-06 20:48:12.889678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:55.987 [2024-12-06 20:48:12.889686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:55.987 [2024-12-06 20:48:12.889695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:55.987 [2024-12-06 20:48:12.889704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:20:55.987 [2024-12-06 20:48:12.889711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:12.902160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.987 [2024-12-06 20:48:12.902188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:55.987 [2024-12-06 20:48:12.902201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.418 ms 00:20:55.987 [2024-12-06 20:48:12.902208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:12.902567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.987 [2024-12-06 20:48:12.902582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:55.987 [2024-12-06 20:48:12.902594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:20:55.987 [2024-12-06 20:48:12.902601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:12.946452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.987 [2024-12-06 20:48:12.946485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.987 [2024-12-06 20:48:12.946498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.987 [2024-12-06 20:48:12.946507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:12.947718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.987 [2024-12-06 20:48:12.947745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.987 [2024-12-06 20:48:12.947758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.987 [2024-12-06 20:48:12.947765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:12.947810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.987 [2024-12-06 20:48:12.947819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.987 [2024-12-06 20:48:12.947830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.987 [2024-12-06 20:48:12.947837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:12.947855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.987 [2024-12-06 20:48:12.947863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.987 [2024-12-06 20:48:12.947872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.987 [2024-12-06 20:48:12.947881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:13.009485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.987 [2024-12-06 20:48:13.009516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.987 [2024-12-06 20:48:13.009526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.987 [2024-12-06 20:48:13.009532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 
20:48:13.057714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.987 [2024-12-06 20:48:13.057749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.987 [2024-12-06 20:48:13.057758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.987 [2024-12-06 20:48:13.057767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:13.057828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.987 [2024-12-06 20:48:13.057835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.987 [2024-12-06 20:48:13.057844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.987 [2024-12-06 20:48:13.057850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:13.057874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.987 [2024-12-06 20:48:13.057880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.987 [2024-12-06 20:48:13.057907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.987 [2024-12-06 20:48:13.057913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:13.057984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.987 [2024-12-06 20:48:13.057992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.987 [2024-12-06 20:48:13.057999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.987 [2024-12-06 20:48:13.058005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:13.058028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.987 [2024-12-06 20:48:13.058035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:55.987 [2024-12-06 20:48:13.058042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.987 [2024-12-06 20:48:13.058048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:13.058080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.987 [2024-12-06 20:48:13.058087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.987 [2024-12-06 20:48:13.058095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.987 [2024-12-06 20:48:13.058101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:13.058135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.987 [2024-12-06 20:48:13.058142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.987 [2024-12-06 20:48:13.058150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.987 [2024-12-06 20:48:13.058155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.987 [2024-12-06 20:48:13.058260] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 242.003 ms, result 0 00:20:56.554 20:48:13 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:56.554 [2024-12-06 20:48:13.637419] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:20:56.554 [2024-12-06 20:48:13.637538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76922 ] 00:20:56.813 [2024-12-06 20:48:13.794027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.814 [2024-12-06 20:48:13.871075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.072 [2024-12-06 20:48:14.082478] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:57.072 [2024-12-06 20:48:14.082527] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:57.333 [2024-12-06 20:48:14.230194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.333 [2024-12-06 20:48:14.230229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:57.333 [2024-12-06 20:48:14.230239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:57.333 [2024-12-06 20:48:14.230245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.333 [2024-12-06 20:48:14.232311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.333 [2024-12-06 20:48:14.232432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:57.333 [2024-12-06 20:48:14.232444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:20:57.333 [2024-12-06 20:48:14.232451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.333 [2024-12-06 20:48:14.232507] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:57.333 [2024-12-06 20:48:14.233033] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:57.333 [2024-12-06 20:48:14.233050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.333 [2024-12-06 20:48:14.233056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:57.333 [2024-12-06 20:48:14.233063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:20:57.333 [2024-12-06 20:48:14.233069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.333 [2024-12-06 20:48:14.234064] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:57.333 [2024-12-06 20:48:14.243575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.333 [2024-12-06 20:48:14.243683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:57.333 [2024-12-06 20:48:14.243696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.513 ms 00:20:57.333 [2024-12-06 20:48:14.243702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.333 [2024-12-06 20:48:14.243762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.333 [2024-12-06 20:48:14.243771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:57.333 [2024-12-06 20:48:14.243777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:57.333 [2024-12-06 
20:48:14.243783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.333 [2024-12-06 20:48:14.248246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.333 [2024-12-06 20:48:14.248269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:57.333 [2024-12-06 20:48:14.248275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.434 ms 00:20:57.333 [2024-12-06 20:48:14.248281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.333 [2024-12-06 20:48:14.248356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.333 [2024-12-06 20:48:14.248363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:57.333 [2024-12-06 20:48:14.248370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:57.333 [2024-12-06 20:48:14.248375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.333 [2024-12-06 20:48:14.248394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.333 [2024-12-06 20:48:14.248400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:57.333 [2024-12-06 20:48:14.248406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:57.333 [2024-12-06 20:48:14.248411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.333 [2024-12-06 20:48:14.248427] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:57.333 [2024-12-06 20:48:14.251091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.333 [2024-12-06 20:48:14.251112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:57.333 [2024-12-06 20:48:14.251119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.667 ms 00:20:57.333 [2024-12-06 20:48:14.251124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.333 [2024-12-06 20:48:14.251153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.333 [2024-12-06 20:48:14.251160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:57.333 [2024-12-06 20:48:14.251166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:57.333 [2024-12-06 20:48:14.251171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.333 [2024-12-06 20:48:14.251186] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:57.333 [2024-12-06 20:48:14.251201] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:57.333 [2024-12-06 20:48:14.251228] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:57.333 [2024-12-06 20:48:14.251239] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:57.333 [2024-12-06 20:48:14.251316] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:57.333 [2024-12-06 20:48:14.251323] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:57.333 [2024-12-06 20:48:14.251331] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:20:57.333 [2024-12-06 20:48:14.251340] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:57.333 [2024-12-06 20:48:14.251347] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:57.333 [2024-12-06 20:48:14.251353] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:57.333 [2024-12-06 20:48:14.251359] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:57.333 [2024-12-06 20:48:14.251364] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:57.333 [2024-12-06 20:48:14.251370] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:57.333 [2024-12-06 20:48:14.251376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.333 [2024-12-06 20:48:14.251381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:57.333 [2024-12-06 20:48:14.251387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:20:57.333 [2024-12-06 20:48:14.251392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.333 [2024-12-06 20:48:14.251458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.333 [2024-12-06 20:48:14.251467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:57.334 [2024-12-06 20:48:14.251473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:57.334 [2024-12-06 20:48:14.251478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.334 [2024-12-06 20:48:14.251552] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:57.334 [2024-12-06 20:48:14.251559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:57.334 [2024-12-06 20:48:14.251565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.334 [2024-12-06 20:48:14.251571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:57.334 [2024-12-06 20:48:14.251582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:57.334 [2024-12-06 20:48:14.251593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:57.334 [2024-12-06 20:48:14.251599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.334 [2024-12-06 20:48:14.251609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:57.334 [2024-12-06 20:48:14.251618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:57.334 [2024-12-06 20:48:14.251623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.334 [2024-12-06 20:48:14.251628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:57.334 [2024-12-06 20:48:14.251635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:57.334 [2024-12-06 20:48:14.251640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:57.334 [2024-12-06 20:48:14.251650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:57.334 [2024-12-06 20:48:14.251655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:57.334 [2024-12-06 20:48:14.251666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.334 [2024-12-06 20:48:14.251677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:57.334 [2024-12-06 20:48:14.251682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.334 [2024-12-06 20:48:14.251692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:57.334 [2024-12-06 20:48:14.251697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.334 [2024-12-06 20:48:14.251706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:57.334 [2024-12-06 20:48:14.251712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.334 [2024-12-06 20:48:14.251722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:57.334 [2024-12-06 20:48:14.251727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.334 [2024-12-06 20:48:14.251737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:57.334 [2024-12-06 20:48:14.251742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:57.334 [2024-12-06 20:48:14.251747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.334 [2024-12-06 20:48:14.251752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:57.334 [2024-12-06 20:48:14.251757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:57.334 [2024-12-06 20:48:14.251762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:57.334 [2024-12-06 20:48:14.251772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:57.334 [2024-12-06 20:48:14.251777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251782] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:57.334 [2024-12-06 20:48:14.251788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:57.334 [2024-12-06 20:48:14.251796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.334 [2024-12-06 20:48:14.251802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.334 [2024-12-06 20:48:14.251808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:57.334 [2024-12-06 20:48:14.251814] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:57.334 [2024-12-06 20:48:14.251819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:57.334 [2024-12-06 20:48:14.251824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:57.334 [2024-12-06 20:48:14.251829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:57.334 [2024-12-06 20:48:14.251834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:57.334 [2024-12-06 20:48:14.251840] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:57.334 [2024-12-06 20:48:14.251847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:57.334 [2024-12-06 20:48:14.251853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:57.334 [2024-12-06 20:48:14.251859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:57.334 [2024-12-06 20:48:14.251864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:57.334 [2024-12-06 20:48:14.251869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:57.334 [2024-12-06 20:48:14.251875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:57.334 [2024-12-06 20:48:14.251880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:57.334 [2024-12-06 20:48:14.251885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:57.334 [2024-12-06 20:48:14.251905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:57.334 [2024-12-06 20:48:14.251911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:57.334 [2024-12-06 20:48:14.251917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:57.334 [2024-12-06 20:48:14.251923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:57.334 [2024-12-06 20:48:14.251928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:57.334 [2024-12-06 20:48:14.251934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:57.334 [2024-12-06 20:48:14.251940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:57.334 [2024-12-06 20:48:14.251945] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:57.334 [2024-12-06 20:48:14.251951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:57.334 [2024-12-06 20:48:14.251958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:57.334 [2024-12-06 20:48:14.251964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:57.334 [2024-12-06 20:48:14.251969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:57.334 [2024-12-06 20:48:14.251975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:57.334 [2024-12-06 20:48:14.251989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.334 [2024-12-06 20:48:14.251996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:57.334 [2024-12-06 20:48:14.252002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:20:57.334 [2024-12-06 20:48:14.252008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.334 [2024-12-06 20:48:14.273017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.334 [2024-12-06 20:48:14.273045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:57.334 [2024-12-06 20:48:14.273053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.965 ms 00:20:57.334 [2024-12-06 20:48:14.273059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.334 [2024-12-06 20:48:14.273151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.334 [2024-12-06 20:48:14.273159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:57.334 [2024-12-06 20:48:14.273165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:57.334 [2024-12-06 20:48:14.273171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.334 [2024-12-06 20:48:14.318744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.334 [2024-12-06 20:48:14.318776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:57.334 [2024-12-06 20:48:14.318788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.557 ms 00:20:57.334 [2024-12-06 20:48:14.318794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.334 [2024-12-06 20:48:14.318852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.334 [2024-12-06 20:48:14.318861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:57.334 [2024-12-06 20:48:14.318869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:57.334 [2024-12-06 20:48:14.318874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.334 [2024-12-06 20:48:14.319178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.334 [2024-12-06 20:48:14.319190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:57.335 [2024-12-06 20:48:14.319197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:20:57.335 [2024-12-06 20:48:14.319206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.319309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.319317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:57.335 [2024-12-06 20:48:14.319323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:57.335 [2024-12-06 20:48:14.319328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.330135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.330159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:57.335 [2024-12-06 20:48:14.330167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.790 ms 00:20:57.335 [2024-12-06 20:48:14.330173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.339742] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:57.335 [2024-12-06 20:48:14.339770] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:57.335 [2024-12-06 20:48:14.339779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.339786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:57.335 [2024-12-06 20:48:14.339793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.520 ms 00:20:57.335 [2024-12-06 20:48:14.339798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.357982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.358009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:57.335 [2024-12-06 20:48:14.358017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.138 ms 00:20:57.335 [2024-12-06 20:48:14.358024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.366615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.366640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:57.335 [2024-12-06 20:48:14.366647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.539 ms 00:20:57.335 [2024-12-06 20:48:14.366652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.375146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.375171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:57.335 [2024-12-06 20:48:14.375179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.454 ms 00:20:57.335 [2024-12-06 20:48:14.375184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.375639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.375662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:57.335 [2024-12-06 20:48:14.375670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:20:57.335 [2024-12-06 20:48:14.375675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.418957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.418995] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:57.335 [2024-12-06 20:48:14.419006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.265 ms 00:20:57.335 [2024-12-06 20:48:14.419013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.426649] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:57.335 [2024-12-06 20:48:14.438027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.438054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:57.335 [2024-12-06 20:48:14.438063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.953 ms 00:20:57.335 [2024-12-06 20:48:14.438074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.438138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.438146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:57.335 [2024-12-06 20:48:14.438153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:57.335 [2024-12-06 20:48:14.438159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.438195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.438201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:57.335 [2024-12-06 20:48:14.438208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:57.335 [2024-12-06 20:48:14.438217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.438241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.438247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:57.335 [2024-12-06 20:48:14.438253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:57.335 [2024-12-06 20:48:14.438259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.438281] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:57.335 [2024-12-06 20:48:14.438288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.438294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:57.335 [2024-12-06 20:48:14.438300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:57.335 [2024-12-06 20:48:14.438305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.456033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.456062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:57.335 [2024-12-06 20:48:14.456071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.714 ms 00:20:57.335 [2024-12-06 20:48:14.456077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.456144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.335 [2024-12-06 20:48:14.456152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:57.335 [2024-12-06 20:48:14.456158] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:57.335 [2024-12-06 20:48:14.456164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.335 [2024-12-06 20:48:14.456788] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:57.335 [2024-12-06 20:48:14.459090] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.375 ms, result 0 00:20:57.335 [2024-12-06 20:48:14.459830] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:57.594 [2024-12-06 20:48:14.474584] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:58.528  [2024-12-06T20:48:16.594Z] Copying: 20/256 [MB] (20 MBps) [2024-12-06T20:48:17.526Z] Copying: 32/256 [MB] (11 MBps) [2024-12-06T20:48:18.897Z] Copying: 44/256 [MB] (11 MBps) [2024-12-06T20:48:19.832Z] Copying: 55/256 [MB] (11 MBps) [2024-12-06T20:48:20.767Z] Copying: 66/256 [MB] (11 MBps) [2024-12-06T20:48:21.704Z] Copying: 82/256 [MB] (15 MBps) [2024-12-06T20:48:22.639Z] Copying: 93/256 [MB] (10 MBps) [2024-12-06T20:48:23.654Z] Copying: 103/256 [MB] (10 MBps) [2024-12-06T20:48:24.589Z] Copying: 115/256 [MB] (11 MBps) [2024-12-06T20:48:25.524Z] Copying: 127/256 [MB] (11 MBps) [2024-12-06T20:48:26.901Z] Copying: 143/256 [MB] (16 MBps) [2024-12-06T20:48:27.842Z] Copying: 160/256 [MB] (16 MBps) [2024-12-06T20:48:28.786Z] Copying: 180/256 [MB] (20 MBps) [2024-12-06T20:48:29.728Z] Copying: 197/256 [MB] (17 MBps) [2024-12-06T20:48:30.664Z] Copying: 214/256 [MB] (16 MBps) [2024-12-06T20:48:31.603Z] Copying: 227/256 [MB] (12 MBps) [2024-12-06T20:48:32.546Z] Copying: 238/256 [MB] (11 MBps) [2024-12-06T20:48:33.489Z] Copying: 248/256 [MB] (10 MBps) [2024-12-06T20:48:33.753Z] Copying: 256/256 [MB] (average 13 MBps)[2024-12-06 20:48:33.520711] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:16.620 [2024-12-06 20:48:33.531641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.620 [2024-12-06 20:48:33.531856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:16.620 [2024-12-06 20:48:33.531924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:16.620 [2024-12-06 20:48:33.531935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.620 [2024-12-06 20:48:33.531975] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:16.620 [2024-12-06 20:48:33.534984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.620 [2024-12-06 20:48:33.535027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:16.620 [2024-12-06 20:48:33.535040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.992 ms 00:21:16.620 [2024-12-06 20:48:33.535049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.620 [2024-12-06 20:48:33.535348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.620 [2024-12-06 20:48:33.535360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:16.620 [2024-12-06 20:48:33.535369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:21:16.620 [2024-12-06 20:48:33.535378] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:16.620 [2024-12-06 20:48:33.539093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.620 [2024-12-06 20:48:33.539120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:16.620 [2024-12-06 20:48:33.539131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.694 ms 00:21:16.620 [2024-12-06 20:48:33.539139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.620 [2024-12-06 20:48:33.546860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.620 [2024-12-06 20:48:33.547049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:16.620 [2024-12-06 20:48:33.547072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.700 ms 00:21:16.620 [2024-12-06 20:48:33.547083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.620 [2024-12-06 20:48:33.575113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.620 [2024-12-06 20:48:33.575167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:16.620 [2024-12-06 20:48:33.575181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.941 ms 00:21:16.620 [2024-12-06 20:48:33.575190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.620 [2024-12-06 20:48:33.591321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.620 [2024-12-06 20:48:33.591369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:16.620 [2024-12-06 20:48:33.591390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.077 ms 00:21:16.620 [2024-12-06 20:48:33.591398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.620 [2024-12-06 20:48:33.591565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.620 [2024-12-06 20:48:33.591578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:16.620 [2024-12-06 20:48:33.591597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:16.620 [2024-12-06 20:48:33.591605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.620 [2024-12-06 20:48:33.617414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.620 [2024-12-06 20:48:33.617461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:16.620 [2024-12-06 20:48:33.617474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.789 ms 00:21:16.620 [2024-12-06 20:48:33.617482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.620 [2024-12-06 20:48:33.642901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.620 [2024-12-06 20:48:33.642949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:16.620 [2024-12-06 20:48:33.642961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.340 ms 00:21:16.620 [2024-12-06 20:48:33.642970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.620 [2024-12-06 20:48:33.667805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.620 [2024-12-06 20:48:33.667853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:16.620 [2024-12-06 20:48:33.667866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.786 ms 00:21:16.620 
[2024-12-06 20:48:33.667875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.620 [2024-12-06 20:48:33.692521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.620 [2024-12-06 20:48:33.692568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:16.620 [2024-12-06 20:48:33.692579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.547 ms 00:21:16.620 [2024-12-06 20:48:33.692588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.620 [2024-12-06 20:48:33.692635] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:16.620 [2024-12-06 20:48:33.692651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:16.620 [2024-12-06 20:48:33.692814] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.692998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 
[2024-12-06 20:48:33.693039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:16.621 [2024-12-06 20:48:33.693256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free
00:21:16.621 [2024-12-06 20:48:33.693264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:21:16.621 [2024-12-06 20:48:33.693420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:21:16.622 [2024-12-06 20:48:33.693428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:21:16.622 [2024-12-06 20:48:33.693435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:21:16.622 [2024-12-06 20:48:33.693454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:21:16.622 [2024-12-06 20:48:33.693463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:21:16.622 [2024-12-06 20:48:33.693471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:21:16.622 [2024-12-06 20:48:33.693479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:21:16.622 [2024-12-06 20:48:33.693487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:21:16.622 [2024-12-06 20:48:33.693496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:21:16.622 [2024-12-06 20:48:33.693505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:21:16.622 [2024-12-06 20:48:33.693521] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:21:16.622 [2024-12-06 20:48:33.693530] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed6b6440-c21e-40a5-a295-d460d8302bed
00:21:16.622 [2024-12-06 20:48:33.693539] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:21:16.622 [2024-12-06 20:48:33.693547] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:21:16.622 [2024-12-06 20:48:33.693555] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:21:16.622 [2024-12-06 20:48:33.693564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:21:16.622 [2024-12-06 20:48:33.693571] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:16.622 [2024-12-06 20:48:33.693579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:21:16.622 [2024-12-06 20:48:33.693591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:21:16.622 [2024-12-06 20:48:33.693598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:21:16.622 [2024-12-06 20:48:33.693605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:21:16.622 [2024-12-06 20:48:33.693613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.622 [2024-12-06 20:48:33.693621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:21:16.622 [2024-12-06 20:48:33.693631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms
00:21:16.622 [2024-12-06 20:48:33.693638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.622 [2024-12-06 20:48:33.707420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.622 [2024-12-06 20:48:33.707462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:21:16.622 [2024-12-06 20:48:33.707474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.745 ms
00:21:16.622 [2024-12-06 20:48:33.707482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.622 [2024-12-06 20:48:33.707928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.622 [2024-12-06 20:48:33.707940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:21:16.622 [2024-12-06 20:48:33.707950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms
00:21:16.622 [2024-12-06 20:48:33.707959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.622 [2024-12-06 20:48:33.747398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:16.622 [2024-12-06 20:48:33.747447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:21:16.622 [2024-12-06 20:48:33.747460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:16.622 [2024-12-06 20:48:33.747474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.622 [2024-12-06 20:48:33.747594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:16.622 [2024-12-06 20:48:33.747605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:21:16.622 [2024-12-06 20:48:33.747614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:16.622 [2024-12-06 20:48:33.747623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.622 [2024-12-06 20:48:33.747678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:16.622 [2024-12-06 20:48:33.747689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:21:16.622 [2024-12-06 20:48:33.747698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:16.622 [2024-12-06 20:48:33.747706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.622 [2024-12-06 20:48:33.747729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:16.622 [2024-12-06 20:48:33.747738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:21:16.622 [2024-12-06 20:48:33.747746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:16.622 [2024-12-06 20:48:33.747754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.881 [2024-12-06 20:48:33.834406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:16.881 [2024-12-06 20:48:33.834471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:21:16.881 [2024-12-06 20:48:33.834487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:16.882 [2024-12-06 20:48:33.834496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.882 [2024-12-06 20:48:33.904529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:16.882 [2024-12-06 20:48:33.904594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:21:16.882 [2024-12-06 20:48:33.904607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:16.882 [2024-12-06 20:48:33.904616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.882 [2024-12-06 20:48:33.904707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:16.882 [2024-12-06 20:48:33.904717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:21:16.882 [2024-12-06 20:48:33.904726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:16.882 [2024-12-06 20:48:33.904735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.882 [2024-12-06 20:48:33.904768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:16.882 [2024-12-06 20:48:33.904784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:21:16.882 [2024-12-06 20:48:33.904793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:16.882 [2024-12-06 20:48:33.904801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.882 [2024-12-06 20:48:33.904939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:16.882 [2024-12-06 20:48:33.904951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:16.882 [2024-12-06 20:48:33.904960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:16.882 [2024-12-06 20:48:33.904968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.882 [2024-12-06 20:48:33.905004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:16.882 [2024-12-06 20:48:33.905015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:16.882 [2024-12-06 20:48:33.905027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:16.882 [2024-12-06 20:48:33.905062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.882 [2024-12-06 20:48:33.905107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:16.882 [2024-12-06 20:48:33.905117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:16.882 [2024-12-06 20:48:33.905125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:16.882 [2024-12-06 20:48:33.905134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.882 [2024-12-06 20:48:33.905182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:16.882 [2024-12-06 20:48:33.905196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:16.882 [2024-12-06 20:48:33.905204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:16.882 [2024-12-06 20:48:33.905212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.882 [2024-12-06 20:48:33.905374] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.734 ms, result 0
00:21:17.817
00:21:17.817
00:21:17.817 20:48:34 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:21:18.074 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:21:18.074 20:48:35 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:21:18.074 20:48:35 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:21:18.074 20:48:35 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:21:18.074 20:48:35 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:18.333 20:48:35 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:21:18.333 20:48:35 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:21:18.333 Process with pid 76867 is not found
00:21:18.333 20:48:35 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76867
00:21:18.333 20:48:35 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76867 ']'
00:21:18.333 20:48:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76867
00:21:18.333 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76867) - No such process
00:21:18.333 20:48:35 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76867 is not found'
************************************
00:21:18.333 END TEST ftl_trim
************************************
00:21:18.333
00:21:18.333 real 1m16.541s
00:21:18.333 user 1m43.450s
00:21:18.333 sys 0m4.942s
00:21:18.333 20:48:35 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
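
The trim teardown above shows autotest's killprocess guard tolerating a pid that has already exited: kill -0 only probes whether the process still exists, and when the probe fails the helper reports the fact instead of treating it as an error. A minimal bash sketch of that pattern follows; it is an assumed simplification, not the verbatim helper from autotest_common.sh, which does more bookkeeping.

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # same "[ -z 76867 ]" guard seen in the trace
        if kill -0 "$pid" 2>/dev/null; then  # signal 0 delivers nothing; it only probes
            kill "$pid" && wait "$pid"       # still alive: terminate and reap
        else
            echo "Process with pid $pid is not found"
        fi
    }

    killprocess 76867   # pid 76867 exited during the earlier shutdown, so only the echo fires
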
00:21:18.333 20:48:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:18.333 20:48:35 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:18.333 20:48:35 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:18.333 20:48:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.333 20:48:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:18.333 ************************************ 00:21:18.333 START TEST ftl_restore 00:21:18.333 ************************************ 00:21:18.333 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:18.333 * Looking for test storage... 00:21:18.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:18.333 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:18.333 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:18.333 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:21:18.591 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:18.591 20:48:35 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:18.591 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:18.591 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.591 --rc genhtml_branch_coverage=1 00:21:18.591 --rc genhtml_function_coverage=1 00:21:18.591 --rc genhtml_legend=1 00:21:18.591 --rc geninfo_all_blocks=1 00:21:18.591 --rc geninfo_unexecuted_blocks=1 00:21:18.591 00:21:18.591 ' 00:21:18.591 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.591 --rc genhtml_branch_coverage=1 00:21:18.591 --rc genhtml_function_coverage=1 00:21:18.591 --rc genhtml_legend=1 00:21:18.591 --rc geninfo_all_blocks=1 00:21:18.591 --rc geninfo_unexecuted_blocks=1 00:21:18.591 00:21:18.591 ' 00:21:18.591 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.591 --rc genhtml_branch_coverage=1 00:21:18.591 --rc genhtml_function_coverage=1 00:21:18.591 --rc genhtml_legend=1 00:21:18.591 --rc geninfo_all_blocks=1 00:21:18.591 --rc geninfo_unexecuted_blocks=1 00:21:18.591 00:21:18.591 ' 00:21:18.591 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.591 --rc genhtml_branch_coverage=1 00:21:18.591 --rc genhtml_function_coverage=1 00:21:18.591 --rc genhtml_legend=1 00:21:18.591 --rc geninfo_all_blocks=1 00:21:18.591 --rc geninfo_unexecuted_blocks=1 00:21:18.591 00:21:18.591 ' 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:18.591 20:48:35 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.WrtVCrtodo 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:18.592 
20:48:35 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77210 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:18.592 20:48:35 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77210 00:21:18.592 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77210 ']' 00:21:18.592 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.592 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.592 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.592 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.592 20:48:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:18.592 [2024-12-06 20:48:35.618504] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:21:18.592 [2024-12-06 20:48:35.618769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77210 ] 00:21:18.851 [2024-12-06 20:48:35.775351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.851 [2024-12-06 20:48:35.872458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.422 20:48:36 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.422 20:48:36 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:19.422 20:48:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:19.422 20:48:36 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:19.422 20:48:36 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:19.422 20:48:36 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:19.422 20:48:36 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:19.422 20:48:36 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:19.683 20:48:36 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:19.683 20:48:36 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:19.683 20:48:36 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:19.683 20:48:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:19.683 20:48:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:19.683 20:48:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:19.683 20:48:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:19.683 20:48:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:19.944 20:48:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:19.944 { 00:21:19.944 "name": "nvme0n1", 00:21:19.944 "aliases": [ 00:21:19.944 "a7ca1989-43d8-491c-b0bb-78b2ad78f3ff" 00:21:19.944 ], 00:21:19.944 "product_name": "NVMe disk", 00:21:19.944 "block_size": 4096, 00:21:19.944 "num_blocks": 1310720, 00:21:19.944 "uuid": 
"a7ca1989-43d8-491c-b0bb-78b2ad78f3ff", 00:21:19.944 "numa_id": -1, 00:21:19.944 "assigned_rate_limits": { 00:21:19.944 "rw_ios_per_sec": 0, 00:21:19.944 "rw_mbytes_per_sec": 0, 00:21:19.944 "r_mbytes_per_sec": 0, 00:21:19.944 "w_mbytes_per_sec": 0 00:21:19.944 }, 00:21:19.944 "claimed": true, 00:21:19.944 "claim_type": "read_many_write_one", 00:21:19.944 "zoned": false, 00:21:19.944 "supported_io_types": { 00:21:19.944 "read": true, 00:21:19.944 "write": true, 00:21:19.945 "unmap": true, 00:21:19.945 "flush": true, 00:21:19.945 "reset": true, 00:21:19.945 "nvme_admin": true, 00:21:19.945 "nvme_io": true, 00:21:19.945 "nvme_io_md": false, 00:21:19.945 "write_zeroes": true, 00:21:19.945 "zcopy": false, 00:21:19.945 "get_zone_info": false, 00:21:19.945 "zone_management": false, 00:21:19.945 "zone_append": false, 00:21:19.945 "compare": true, 00:21:19.945 "compare_and_write": false, 00:21:19.945 "abort": true, 00:21:19.945 "seek_hole": false, 00:21:19.945 "seek_data": false, 00:21:19.945 "copy": true, 00:21:19.945 "nvme_iov_md": false 00:21:19.945 }, 00:21:19.945 "driver_specific": { 00:21:19.945 "nvme": [ 00:21:19.945 { 00:21:19.945 "pci_address": "0000:00:11.0", 00:21:19.945 "trid": { 00:21:19.945 "trtype": "PCIe", 00:21:19.945 "traddr": "0000:00:11.0" 00:21:19.945 }, 00:21:19.945 "ctrlr_data": { 00:21:19.945 "cntlid": 0, 00:21:19.945 "vendor_id": "0x1b36", 00:21:19.945 "model_number": "QEMU NVMe Ctrl", 00:21:19.945 "serial_number": "12341", 00:21:19.945 "firmware_revision": "8.0.0", 00:21:19.945 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:19.945 "oacs": { 00:21:19.945 "security": 0, 00:21:19.945 "format": 1, 00:21:19.945 "firmware": 0, 00:21:19.945 "ns_manage": 1 00:21:19.945 }, 00:21:19.945 "multi_ctrlr": false, 00:21:19.945 "ana_reporting": false 00:21:19.945 }, 00:21:19.945 "vs": { 00:21:19.945 "nvme_version": "1.4" 00:21:19.945 }, 00:21:19.945 "ns_data": { 00:21:19.945 "id": 1, 00:21:19.945 "can_share": false 00:21:19.945 } 00:21:19.945 } 00:21:19.945 ], 00:21:19.945 "mp_policy": "active_passive" 00:21:19.945 } 00:21:19.945 } 00:21:19.945 ]' 00:21:19.945 20:48:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:19.945 20:48:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:19.945 20:48:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:19.945 20:48:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:19.945 20:48:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:19.945 20:48:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:19.945 20:48:37 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:19.945 20:48:37 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:19.945 20:48:37 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:19.945 20:48:37 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:19.945 20:48:37 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:20.206 20:48:37 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=a27919f1-ccb0-4459-bc69-8cd3d7e88d38 00:21:20.206 20:48:37 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:20.206 20:48:37 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a27919f1-ccb0-4459-bc69-8cd3d7e88d38 00:21:20.468 20:48:37 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:20.726 20:48:37 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=092b3725-d6f8-4350-a688-0d9329755058 00:21:20.726 20:48:37 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 092b3725-d6f8-4350-a688-0d9329755058 00:21:20.985 20:48:37 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=1969964a-ecbf-4e99-b407-8ba7c5278ced 00:21:20.985 20:48:37 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:20.985 20:48:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1969964a-ecbf-4e99-b407-8ba7c5278ced 00:21:20.985 20:48:37 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:20.985 20:48:37 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:20.985 20:48:37 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=1969964a-ecbf-4e99-b407-8ba7c5278ced 00:21:20.985 20:48:37 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:20.985 20:48:37 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 1969964a-ecbf-4e99-b407-8ba7c5278ced 00:21:20.985 20:48:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=1969964a-ecbf-4e99-b407-8ba7c5278ced 00:21:20.985 20:48:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:20.985 20:48:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:20.985 20:48:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:20.985 20:48:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1969964a-ecbf-4e99-b407-8ba7c5278ced 00:21:20.985 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:20.985 { 00:21:20.985 "name": "1969964a-ecbf-4e99-b407-8ba7c5278ced", 00:21:20.985 "aliases": [ 00:21:20.985 "lvs/nvme0n1p0" 00:21:20.985 ], 00:21:20.985 "product_name": "Logical Volume", 00:21:20.985 "block_size": 4096, 00:21:20.985 "num_blocks": 26476544, 00:21:20.985 "uuid": "1969964a-ecbf-4e99-b407-8ba7c5278ced", 00:21:20.985 "assigned_rate_limits": { 00:21:20.985 "rw_ios_per_sec": 0, 00:21:20.985 "rw_mbytes_per_sec": 0, 00:21:20.985 "r_mbytes_per_sec": 0, 00:21:20.985 "w_mbytes_per_sec": 0 00:21:20.985 }, 00:21:20.985 "claimed": false, 00:21:20.985 "zoned": false, 00:21:20.985 "supported_io_types": { 00:21:20.985 "read": true, 00:21:20.985 "write": true, 00:21:20.985 "unmap": true, 00:21:20.985 "flush": false, 00:21:20.985 "reset": true, 00:21:20.985 "nvme_admin": false, 00:21:20.985 "nvme_io": false, 00:21:20.985 "nvme_io_md": false, 00:21:20.985 "write_zeroes": true, 00:21:20.985 "zcopy": false, 00:21:20.985 "get_zone_info": false, 00:21:20.985 "zone_management": false, 00:21:20.985 "zone_append": false, 00:21:20.985 "compare": false, 00:21:20.985 "compare_and_write": false, 00:21:20.985 "abort": false, 00:21:20.985 "seek_hole": true, 00:21:20.985 "seek_data": true, 00:21:20.985 "copy": false, 00:21:20.985 "nvme_iov_md": false 00:21:20.985 }, 00:21:20.985 "driver_specific": { 00:21:20.985 "lvol": { 00:21:20.985 "lvol_store_uuid": "092b3725-d6f8-4350-a688-0d9329755058", 00:21:20.985 "base_bdev": "nvme0n1", 00:21:20.985 "thin_provision": true, 00:21:20.985 "num_allocated_clusters": 0, 00:21:20.985 "snapshot": false, 00:21:20.985 "clone": false, 00:21:20.985 "esnap_clone": false 00:21:20.985 } 00:21:20.985 } 00:21:20.985 } 00:21:20.985 ]' 00:21:20.985 20:48:38 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:20.985 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:20.985 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:21.244 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:21.245 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:21.245 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:21.245 20:48:38 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:21.245 20:48:38 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:21.245 20:48:38 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:21.503 20:48:38 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:21.503 20:48:38 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:21.503 20:48:38 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 1969964a-ecbf-4e99-b407-8ba7c5278ced 00:21:21.503 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=1969964a-ecbf-4e99-b407-8ba7c5278ced 00:21:21.503 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:21.503 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:21.503 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:21.503 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1969964a-ecbf-4e99-b407-8ba7c5278ced 00:21:21.503 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:21.503 { 00:21:21.503 "name": "1969964a-ecbf-4e99-b407-8ba7c5278ced", 00:21:21.503 "aliases": [ 00:21:21.503 "lvs/nvme0n1p0" 00:21:21.503 ], 00:21:21.503 "product_name": "Logical Volume", 00:21:21.503 "block_size": 4096, 00:21:21.503 "num_blocks": 26476544, 00:21:21.503 "uuid": "1969964a-ecbf-4e99-b407-8ba7c5278ced", 00:21:21.503 "assigned_rate_limits": { 00:21:21.503 "rw_ios_per_sec": 0, 00:21:21.503 "rw_mbytes_per_sec": 0, 00:21:21.503 "r_mbytes_per_sec": 0, 00:21:21.503 "w_mbytes_per_sec": 0 00:21:21.503 }, 00:21:21.503 "claimed": false, 00:21:21.503 "zoned": false, 00:21:21.503 "supported_io_types": { 00:21:21.503 "read": true, 00:21:21.503 "write": true, 00:21:21.503 "unmap": true, 00:21:21.503 "flush": false, 00:21:21.503 "reset": true, 00:21:21.503 "nvme_admin": false, 00:21:21.503 "nvme_io": false, 00:21:21.503 "nvme_io_md": false, 00:21:21.503 "write_zeroes": true, 00:21:21.503 "zcopy": false, 00:21:21.503 "get_zone_info": false, 00:21:21.503 "zone_management": false, 00:21:21.503 "zone_append": false, 00:21:21.503 "compare": false, 00:21:21.503 "compare_and_write": false, 00:21:21.503 "abort": false, 00:21:21.503 "seek_hole": true, 00:21:21.503 "seek_data": true, 00:21:21.503 "copy": false, 00:21:21.503 "nvme_iov_md": false 00:21:21.503 }, 00:21:21.503 "driver_specific": { 00:21:21.503 "lvol": { 00:21:21.503 "lvol_store_uuid": "092b3725-d6f8-4350-a688-0d9329755058", 00:21:21.503 "base_bdev": "nvme0n1", 00:21:21.503 "thin_provision": true, 00:21:21.503 "num_allocated_clusters": 0, 00:21:21.503 "snapshot": false, 00:21:21.503 "clone": false, 00:21:21.503 "esnap_clone": false 00:21:21.503 } 00:21:21.503 } 00:21:21.503 } 00:21:21.503 ]' 00:21:21.503 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:21:21.503 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:21.503 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:21.759 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:21.759 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:21.759 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:21.759 20:48:38 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:21.759 20:48:38 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:21.759 20:48:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:21.759 20:48:38 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 1969964a-ecbf-4e99-b407-8ba7c5278ced 00:21:21.759 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=1969964a-ecbf-4e99-b407-8ba7c5278ced 00:21:21.759 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:21.759 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:21.759 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:21.759 20:48:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1969964a-ecbf-4e99-b407-8ba7c5278ced 00:21:22.017 20:48:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:22.017 { 00:21:22.018 "name": "1969964a-ecbf-4e99-b407-8ba7c5278ced", 00:21:22.018 "aliases": [ 00:21:22.018 "lvs/nvme0n1p0" 00:21:22.018 ], 00:21:22.018 "product_name": "Logical Volume", 00:21:22.018 "block_size": 4096, 00:21:22.018 "num_blocks": 26476544, 00:21:22.018 "uuid": "1969964a-ecbf-4e99-b407-8ba7c5278ced", 00:21:22.018 "assigned_rate_limits": { 00:21:22.018 "rw_ios_per_sec": 0, 00:21:22.018 "rw_mbytes_per_sec": 0, 00:21:22.018 "r_mbytes_per_sec": 0, 00:21:22.018 "w_mbytes_per_sec": 0 00:21:22.018 }, 00:21:22.018 "claimed": false, 00:21:22.018 "zoned": false, 00:21:22.018 "supported_io_types": { 00:21:22.018 "read": true, 00:21:22.018 "write": true, 00:21:22.018 "unmap": true, 00:21:22.018 "flush": false, 00:21:22.018 "reset": true, 00:21:22.018 "nvme_admin": false, 00:21:22.018 "nvme_io": false, 00:21:22.018 "nvme_io_md": false, 00:21:22.018 "write_zeroes": true, 00:21:22.018 "zcopy": false, 00:21:22.018 "get_zone_info": false, 00:21:22.018 "zone_management": false, 00:21:22.018 "zone_append": false, 00:21:22.018 "compare": false, 00:21:22.018 "compare_and_write": false, 00:21:22.018 "abort": false, 00:21:22.018 "seek_hole": true, 00:21:22.018 "seek_data": true, 00:21:22.018 "copy": false, 00:21:22.018 "nvme_iov_md": false 00:21:22.018 }, 00:21:22.018 "driver_specific": { 00:21:22.018 "lvol": { 00:21:22.018 "lvol_store_uuid": "092b3725-d6f8-4350-a688-0d9329755058", 00:21:22.018 "base_bdev": "nvme0n1", 00:21:22.018 "thin_provision": true, 00:21:22.018 "num_allocated_clusters": 0, 00:21:22.018 "snapshot": false, 00:21:22.018 "clone": false, 00:21:22.018 "esnap_clone": false 00:21:22.018 } 00:21:22.018 } 00:21:22.018 } 00:21:22.018 ]' 00:21:22.018 20:48:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:22.018 20:48:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:22.018 20:48:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:22.018 20:48:39 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:22.018 20:48:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:22.018 20:48:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:22.018 20:48:39 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:22.018 20:48:39 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1969964a-ecbf-4e99-b407-8ba7c5278ced --l2p_dram_limit 10' 00:21:22.018 20:48:39 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:22.018 20:48:39 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:22.018 20:48:39 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:22.018 20:48:39 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:22.018 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:22.018 20:48:39 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1969964a-ecbf-4e99-b407-8ba7c5278ced --l2p_dram_limit 10 -c nvc0n1p0 00:21:22.278 [2024-12-06 20:48:39.300682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.278 [2024-12-06 20:48:39.300722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:22.278 [2024-12-06 20:48:39.300735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:22.278 [2024-12-06 20:48:39.300742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.278 [2024-12-06 20:48:39.300786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-12-06 20:48:39.300794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:22.279 [2024-12-06 20:48:39.300801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:22.279 [2024-12-06 20:48:39.300807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-12-06 20:48:39.300825] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:22.279 [2024-12-06 20:48:39.301433] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:22.279 [2024-12-06 20:48:39.301453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-12-06 20:48:39.301459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:22.279 [2024-12-06 20:48:39.301468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:21:22.279 [2024-12-06 20:48:39.301474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-12-06 20:48:39.301533] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5bfd91c7-c6a7-4ebc-88c6-cb1b2f1b017d 00:21:22.279 [2024-12-06 20:48:39.302478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-12-06 20:48:39.302572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:22.279 [2024-12-06 20:48:39.302586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:22.279 [2024-12-06 20:48:39.302593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-12-06 20:48:39.307295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-12-06 
20:48:39.307325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:22.279 [2024-12-06 20:48:39.307333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.669 ms 00:21:22.279 [2024-12-06 20:48:39.307340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-12-06 20:48:39.307406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-12-06 20:48:39.307414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:22.279 [2024-12-06 20:48:39.307421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:22.279 [2024-12-06 20:48:39.307431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-12-06 20:48:39.307471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-12-06 20:48:39.307480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:22.279 [2024-12-06 20:48:39.307489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:22.279 [2024-12-06 20:48:39.307496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-12-06 20:48:39.307512] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:22.279 [2024-12-06 20:48:39.310389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-12-06 20:48:39.310414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.279 [2024-12-06 20:48:39.310424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:21:22.279 [2024-12-06 20:48:39.310431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-12-06 20:48:39.310460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-12-06 20:48:39.310466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:22.279 [2024-12-06 20:48:39.310473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:22.279 [2024-12-06 20:48:39.310479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-12-06 20:48:39.310502] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:22.279 [2024-12-06 20:48:39.310613] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:22.279 [2024-12-06 20:48:39.310625] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:22.279 [2024-12-06 20:48:39.310634] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:22.279 [2024-12-06 20:48:39.310643] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:22.279 [2024-12-06 20:48:39.310650] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:22.279 [2024-12-06 20:48:39.310658] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:22.279 [2024-12-06 20:48:39.310663] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:22.279 [2024-12-06 20:48:39.310673] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:22.279 [2024-12-06 20:48:39.310679] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:22.279 [2024-12-06 20:48:39.310687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-12-06 20:48:39.310697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:22.279 [2024-12-06 20:48:39.310705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:21:22.279 [2024-12-06 20:48:39.310711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-12-06 20:48:39.310776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-12-06 20:48:39.310782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:22.279 [2024-12-06 20:48:39.310790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:22.279 [2024-12-06 20:48:39.310795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-12-06 20:48:39.310875] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:22.279 [2024-12-06 20:48:39.310883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:22.279 [2024-12-06 20:48:39.310907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:22.279 [2024-12-06 20:48:39.310913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.279 [2024-12-06 20:48:39.310921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:22.279 [2024-12-06 20:48:39.310926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:22.279 [2024-12-06 20:48:39.310932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:22.279 [2024-12-06 20:48:39.310938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:22.279 [2024-12-06 20:48:39.310945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:22.279 [2024-12-06 20:48:39.310951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:22.279 [2024-12-06 20:48:39.310957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:22.279 [2024-12-06 20:48:39.310963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:22.279 [2024-12-06 20:48:39.310969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:22.279 [2024-12-06 20:48:39.310975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:22.279 [2024-12-06 20:48:39.310981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:22.279 [2024-12-06 20:48:39.310987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.279 [2024-12-06 20:48:39.310995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:22.279 [2024-12-06 20:48:39.311001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:22.279 [2024-12-06 20:48:39.311007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.279 [2024-12-06 20:48:39.311012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:22.279 [2024-12-06 20:48:39.311019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:22.279 [2024-12-06 20:48:39.311024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.279 [2024-12-06 20:48:39.311030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:22.279 
[2024-12-06 20:48:39.311035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:22.279 [2024-12-06 20:48:39.311041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.279 [2024-12-06 20:48:39.311046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:22.279 [2024-12-06 20:48:39.311053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:22.279 [2024-12-06 20:48:39.311057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.279 [2024-12-06 20:48:39.311064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:22.279 [2024-12-06 20:48:39.311069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:22.279 [2024-12-06 20:48:39.311075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.279 [2024-12-06 20:48:39.311087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:22.279 [2024-12-06 20:48:39.311095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:22.279 [2024-12-06 20:48:39.311100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:22.279 [2024-12-06 20:48:39.311107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:22.279 [2024-12-06 20:48:39.311112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:22.279 [2024-12-06 20:48:39.311118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:22.279 [2024-12-06 20:48:39.311123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:22.279 [2024-12-06 20:48:39.311129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:22.279 [2024-12-06 20:48:39.311134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.279 [2024-12-06 20:48:39.311140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:22.279 [2024-12-06 20:48:39.311145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:22.279 [2024-12-06 20:48:39.311151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.279 [2024-12-06 20:48:39.311156] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:22.279 [2024-12-06 20:48:39.311163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:22.279 [2024-12-06 20:48:39.311169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:22.279 [2024-12-06 20:48:39.311176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.279 [2024-12-06 20:48:39.311182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:22.279 [2024-12-06 20:48:39.311191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:22.279 [2024-12-06 20:48:39.311197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:22.280 [2024-12-06 20:48:39.311204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:22.280 [2024-12-06 20:48:39.311208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:22.280 [2024-12-06 20:48:39.311215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:22.280 [2024-12-06 20:48:39.311222] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:22.280 [2024-12-06 
20:48:39.311232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:22.280 [2024-12-06 20:48:39.311238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:22.280 [2024-12-06 20:48:39.311245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:22.280 [2024-12-06 20:48:39.311250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:22.280 [2024-12-06 20:48:39.311257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:22.280 [2024-12-06 20:48:39.311262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:22.280 [2024-12-06 20:48:39.311270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:22.280 [2024-12-06 20:48:39.311275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:22.280 [2024-12-06 20:48:39.311282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:22.280 [2024-12-06 20:48:39.311287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:22.280 [2024-12-06 20:48:39.311295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:22.280 [2024-12-06 20:48:39.311300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:22.280 [2024-12-06 20:48:39.311307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:22.280 [2024-12-06 20:48:39.311312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:22.280 [2024-12-06 20:48:39.311318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:22.280 [2024-12-06 20:48:39.311324] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:22.280 [2024-12-06 20:48:39.311331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:22.280 [2024-12-06 20:48:39.311337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:22.280 [2024-12-06 20:48:39.311344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:22.280 [2024-12-06 20:48:39.311350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:22.280 [2024-12-06 20:48:39.311357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:22.280 [2024-12-06 20:48:39.311362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.280 [2024-12-06 20:48:39.311369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:22.280 [2024-12-06 20:48:39.311375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:21:22.280 [2024-12-06 20:48:39.311381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.280 [2024-12-06 20:48:39.311420] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:22.280 [2024-12-06 20:48:39.311431] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:24.810 [2024-12-06 20:48:41.783460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.810 [2024-12-06 20:48:41.783534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:24.810 [2024-12-06 20:48:41.783550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2472.031 ms 00:21:24.810 [2024-12-06 20:48:41.783560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.810 [2024-12-06 20:48:41.808675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.810 [2024-12-06 20:48:41.808721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.810 [2024-12-06 20:48:41.808733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.913 ms 00:21:24.810 [2024-12-06 20:48:41.808743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.810 [2024-12-06 20:48:41.808863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.810 [2024-12-06 20:48:41.808875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:24.810 [2024-12-06 20:48:41.808884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:24.810 [2024-12-06 20:48:41.808914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.810 [2024-12-06 20:48:41.839055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.810 [2024-12-06 20:48:41.839216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.810 [2024-12-06 20:48:41.839233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.109 ms 00:21:24.810 [2024-12-06 20:48:41.839242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.810 [2024-12-06 20:48:41.839273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.810 [2024-12-06 20:48:41.839287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.810 [2024-12-06 20:48:41.839295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:24.810 [2024-12-06 20:48:41.839309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.810 [2024-12-06 20:48:41.839652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.810 [2024-12-06 20:48:41.839670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.810 [2024-12-06 20:48:41.839679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:21:24.810 [2024-12-06 20:48:41.839688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.810 
[2024-12-06 20:48:41.839786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.810 [2024-12-06 20:48:41.839796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.810 [2024-12-06 20:48:41.839807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:21:24.810 [2024-12-06 20:48:41.839818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.810 [2024-12-06 20:48:41.853669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.810 [2024-12-06 20:48:41.853705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.810 [2024-12-06 20:48:41.853715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.834 ms 00:21:24.810 [2024-12-06 20:48:41.853724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.810 [2024-12-06 20:48:41.878812] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:24.810 [2024-12-06 20:48:41.881444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.810 [2024-12-06 20:48:41.881475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:24.810 [2024-12-06 20:48:41.881489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.652 ms 00:21:24.810 [2024-12-06 20:48:41.881498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.069 [2024-12-06 20:48:41.943260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.069 [2024-12-06 20:48:41.943306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:25.069 [2024-12-06 20:48:41.943321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.727 ms 00:21:25.069 [2024-12-06 20:48:41.943329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.069 [2024-12-06 20:48:41.943506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.069 [2024-12-06 20:48:41.943519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:25.069 [2024-12-06 20:48:41.943533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:21:25.069 [2024-12-06 20:48:41.943540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.069 [2024-12-06 20:48:41.966140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.069 [2024-12-06 20:48:41.966284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:25.069 [2024-12-06 20:48:41.966306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.541 ms 00:21:25.069 [2024-12-06 20:48:41.966314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.069 [2024-12-06 20:48:41.988460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.070 [2024-12-06 20:48:41.988503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:25.070 [2024-12-06 20:48:41.988517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.106 ms 00:21:25.070 [2024-12-06 20:48:41.988524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.070 [2024-12-06 20:48:41.989091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.070 [2024-12-06 20:48:41.989111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:25.070 
[2024-12-06 20:48:41.989122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:21:25.070 [2024-12-06 20:48:41.989131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.070 [2024-12-06 20:48:42.065248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.070 [2024-12-06 20:48:42.065402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:25.070 [2024-12-06 20:48:42.065428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.081 ms 00:21:25.070 [2024-12-06 20:48:42.065437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.070 [2024-12-06 20:48:42.089147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.070 [2024-12-06 20:48:42.089181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:25.070 [2024-12-06 20:48:42.089194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.635 ms 00:21:25.070 [2024-12-06 20:48:42.089203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.070 [2024-12-06 20:48:42.111992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.070 [2024-12-06 20:48:42.112111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:25.070 [2024-12-06 20:48:42.112129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.750 ms 00:21:25.070 [2024-12-06 20:48:42.112137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.070 [2024-12-06 20:48:42.134958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.070 [2024-12-06 20:48:42.135073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:25.070 [2024-12-06 20:48:42.135091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.787 ms 00:21:25.070 [2024-12-06 20:48:42.135099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.070 [2024-12-06 20:48:42.135137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.070 [2024-12-06 20:48:42.135147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:25.070 [2024-12-06 20:48:42.135159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:25.070 [2024-12-06 20:48:42.135166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.070 [2024-12-06 20:48:42.135240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.070 [2024-12-06 20:48:42.135252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:25.070 [2024-12-06 20:48:42.135261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:25.070 [2024-12-06 20:48:42.135269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.070 [2024-12-06 20:48:42.136453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2835.324 ms, result 0 00:21:25.070 { 00:21:25.070 "name": "ftl0", 00:21:25.070 "uuid": "5bfd91c7-c6a7-4ebc-88c6-cb1b2f1b017d" 00:21:25.070 } 00:21:25.070 20:48:42 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:25.070 20:48:42 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:25.329 20:48:42 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:25.329 20:48:42 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:25.589 [2024-12-06 20:48:42.543688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.589 [2024-12-06 20:48:42.543740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:25.589 [2024-12-06 20:48:42.543753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:25.589 [2024-12-06 20:48:42.543763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.589 [2024-12-06 20:48:42.543785] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:25.589 [2024-12-06 20:48:42.546399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.589 [2024-12-06 20:48:42.546530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:25.589 [2024-12-06 20:48:42.546552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:21:25.589 [2024-12-06 20:48:42.546560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.589 [2024-12-06 20:48:42.546829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.589 [2024-12-06 20:48:42.546846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:25.589 [2024-12-06 20:48:42.546857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:21:25.589 [2024-12-06 20:48:42.546864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.589 [2024-12-06 20:48:42.550111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.589 [2024-12-06 20:48:42.550132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:25.589 [2024-12-06 20:48:42.550143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.230 ms 00:21:25.589 [2024-12-06 20:48:42.550152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.589 [2024-12-06 20:48:42.556360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.589 [2024-12-06 20:48:42.556384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:25.589 [2024-12-06 20:48:42.556397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.188 ms 00:21:25.589 [2024-12-06 20:48:42.556404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.589 [2024-12-06 20:48:42.579496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.589 [2024-12-06 20:48:42.579528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:25.589 [2024-12-06 20:48:42.579541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.020 ms 00:21:25.589 [2024-12-06 20:48:42.579549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.589 [2024-12-06 20:48:42.593364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.589 [2024-12-06 20:48:42.593493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:25.589 [2024-12-06 20:48:42.593514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.775 ms 00:21:25.589 [2024-12-06 20:48:42.593522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.589 [2024-12-06 20:48:42.593672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.589 [2024-12-06 20:48:42.593683] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:25.589 [2024-12-06 20:48:42.593694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:21:25.589 [2024-12-06 20:48:42.593701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.589 [2024-12-06 20:48:42.615960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.589 [2024-12-06 20:48:42.616084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:25.589 [2024-12-06 20:48:42.616103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.239 ms 00:21:25.589 [2024-12-06 20:48:42.616110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.589 [2024-12-06 20:48:42.638517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.589 [2024-12-06 20:48:42.638546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:25.589 [2024-12-06 20:48:42.638558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.375 ms 00:21:25.590 [2024-12-06 20:48:42.638565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.590 [2024-12-06 20:48:42.660960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.590 [2024-12-06 20:48:42.660990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:25.590 [2024-12-06 20:48:42.661002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.356 ms 00:21:25.590 [2024-12-06 20:48:42.661009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.590 [2024-12-06 20:48:42.683451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.590 [2024-12-06 20:48:42.683567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:25.590 [2024-12-06 20:48:42.683585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.370 ms 00:21:25.590 [2024-12-06 20:48:42.683592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.590 [2024-12-06 20:48:42.683623] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:25.590 [2024-12-06 20:48:42.683636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683719] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 
[2024-12-06 20:48:42.683945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.683996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:25.590 [2024-12-06 20:48:42.684158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:25.590 [2024-12-06 20:48:42.684346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:25.591 [2024-12-06 20:48:42.684519] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:25.591 [2024-12-06 20:48:42.684528] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5bfd91c7-c6a7-4ebc-88c6-cb1b2f1b017d 00:21:25.591 [2024-12-06 20:48:42.684536] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:25.591 [2024-12-06 20:48:42.684546] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:25.591 [2024-12-06 20:48:42.684555] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:25.591 [2024-12-06 20:48:42.684564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:25.591 [2024-12-06 20:48:42.684570] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:25.591 [2024-12-06 20:48:42.684580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:25.591 [2024-12-06 20:48:42.684587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:25.591 [2024-12-06 20:48:42.684594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:25.591 [2024-12-06 20:48:42.684600] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:25.591 [2024-12-06 20:48:42.684608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.591 [2024-12-06 20:48:42.684616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:25.591 [2024-12-06 20:48:42.684625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:21:25.591 [2024-12-06 20:48:42.684634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.591 [2024-12-06 20:48:42.697565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.591 [2024-12-06 20:48:42.697669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:25.591 [2024-12-06 20:48:42.697781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.896 ms 00:21:25.591 [2024-12-06 20:48:42.697886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.591 [2024-12-06 20:48:42.698282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.591 [2024-12-06 20:48:42.698362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:25.591 [2024-12-06 20:48:42.698419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:21:25.591 [2024-12-06 20:48:42.698442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.739971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.851 [2024-12-06 20:48:42.740075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:25.851 [2024-12-06 20:48:42.740130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.851 [2024-12-06 20:48:42.740232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.740315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.851 [2024-12-06 20:48:42.740364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:25.851 [2024-12-06 20:48:42.740392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.851 [2024-12-06 20:48:42.740411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.740497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.851 [2024-12-06 20:48:42.740570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:25.851 [2024-12-06 20:48:42.740596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.851 [2024-12-06 20:48:42.740615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.740648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.851 [2024-12-06 20:48:42.740668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:25.851 [2024-12-06 20:48:42.740718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.851 [2024-12-06 20:48:42.740742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.817365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.851 [2024-12-06 20:48:42.817488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:25.851 [2024-12-06 20:48:42.817543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:25.851 [2024-12-06 20:48:42.817564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.880242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.851 [2024-12-06 20:48:42.880374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:25.851 [2024-12-06 20:48:42.880425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.851 [2024-12-06 20:48:42.880450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.880548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.851 [2024-12-06 20:48:42.880572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:25.851 [2024-12-06 20:48:42.880594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.851 [2024-12-06 20:48:42.880613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.880673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.851 [2024-12-06 20:48:42.880800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:25.851 [2024-12-06 20:48:42.880821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.851 [2024-12-06 20:48:42.880840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.880959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.851 [2024-12-06 20:48:42.881027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:25.851 [2024-12-06 20:48:42.881048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.851 [2024-12-06 20:48:42.881067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.881179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.851 [2024-12-06 20:48:42.881204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:25.851 [2024-12-06 20:48:42.881227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.851 [2024-12-06 20:48:42.881285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.881340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.851 [2024-12-06 20:48:42.881362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:25.851 [2024-12-06 20:48:42.881383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.851 [2024-12-06 20:48:42.881446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.881502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.851 [2024-12-06 20:48:42.881525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:25.851 [2024-12-06 20:48:42.881577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.851 [2024-12-06 20:48:42.881599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.851 [2024-12-06 20:48:42.881770] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.050 ms, result 0 00:21:25.851 true 00:21:25.851 20:48:42 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77210 
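restore.sh@66 tears the app down through killprocess, and the xtrace that follows walks its checks line by line. A condensed reconstruction of that path, pieced together from this trace alone — the authoritative helper lives in common/autotest_common.sh, so treat this as an approximation:

    #!/usr/bin/env bash
    # Approximate killprocess(), reconstructed from the @954..@978 xtrace
    # below; only the branch this run actually took is modeled.
    killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1            # @954: require a pid argument
      kill -0 "$pid" || return 1           # @958: bail out if the process is gone
      local process_name=
      if [ "$(uname)" = Linux ]; then      # @959
        # @960: resolve the command name (traces as reactor_0 here)
        process_name=$(ps --no-headers -o comm= "$pid")
      fi
      # @964: the trace compares against "sudo"; false in this run,
      # so the sudo-specific handling is not visible and is omitted.
      echo "killing process with pid $pid" # @972
      kill "$pid"                          # @973
      wait "$pid" || true                  # @978: reap it (pid is a child of the test shell)
    }

For scale on the next step: the dd invocation that follows writes count x bs = 262144 x 4096 B = 1073741824 B (1 GiB), matching the logged byte count, and 1073741824 B / 3.61142 s works out to the reported 297 MB/s.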
00:21:25.851 20:48:42 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77210 ']' 00:21:25.851 20:48:42 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77210 00:21:25.851 20:48:42 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:25.851 20:48:42 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.851 20:48:42 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77210 00:21:25.851 killing process with pid 77210 00:21:25.851 20:48:42 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.851 20:48:42 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.851 20:48:42 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77210' 00:21:25.851 20:48:42 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77210 00:21:25.852 20:48:42 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77210 00:21:32.519 20:48:49 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:35.842 262144+0 records in 00:21:35.842 262144+0 records out 00:21:35.842 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.61142 s, 297 MB/s 00:21:35.842 20:48:52 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:38.391 20:48:54 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:38.391 [2024-12-06 20:48:54.959670] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:21:38.391 [2024-12-06 20:48:54.959959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77425 ] 00:21:38.391 [2024-12-06 20:48:55.121610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.391 [2024-12-06 20:48:55.239383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.652 [2024-12-06 20:48:55.535783] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:38.652 [2024-12-06 20:48:55.535866] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:38.652 [2024-12-06 20:48:55.697382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.652 [2024-12-06 20:48:55.697445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:38.652 [2024-12-06 20:48:55.697459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:38.652 [2024-12-06 20:48:55.697469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.652 [2024-12-06 20:48:55.697525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.652 [2024-12-06 20:48:55.697539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:38.652 [2024-12-06 20:48:55.697548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:38.652 [2024-12-06 20:48:55.697556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.652 [2024-12-06 20:48:55.697577] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:38.652 [2024-12-06 20:48:55.698348] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:38.652 [2024-12-06 20:48:55.698367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.652 [2024-12-06 20:48:55.698376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:38.652 [2024-12-06 20:48:55.698386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:21:38.652 [2024-12-06 20:48:55.698393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.652 [2024-12-06 20:48:55.700083] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:38.652 [2024-12-06 20:48:55.714487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.652 [2024-12-06 20:48:55.714535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:38.652 [2024-12-06 20:48:55.714550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.406 ms 00:21:38.652 [2024-12-06 20:48:55.714559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.652 [2024-12-06 20:48:55.714642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.652 [2024-12-06 20:48:55.714653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:38.652 [2024-12-06 20:48:55.714662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:38.652 [2024-12-06 20:48:55.714670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.652 [2024-12-06 20:48:55.722685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.652 [2024-12-06 20:48:55.722907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:38.652 [2024-12-06 20:48:55.722927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.935 ms 00:21:38.652 [2024-12-06 20:48:55.722944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.652 [2024-12-06 20:48:55.723024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.652 [2024-12-06 20:48:55.723034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:38.652 [2024-12-06 20:48:55.723043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:38.652 [2024-12-06 20:48:55.723051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.652 [2024-12-06 20:48:55.723096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.652 [2024-12-06 20:48:55.723107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:38.652 [2024-12-06 20:48:55.723116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:38.652 [2024-12-06 20:48:55.723124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.652 [2024-12-06 20:48:55.723149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:38.652 [2024-12-06 20:48:55.727194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.652 [2024-12-06 20:48:55.727232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:38.652 [2024-12-06 20:48:55.727246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.050 ms 00:21:38.652 [2024-12-06 20:48:55.727254] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.652 [2024-12-06 20:48:55.727294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.652 [2024-12-06 20:48:55.727303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:38.652 [2024-12-06 20:48:55.727312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:38.652 [2024-12-06 20:48:55.727320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.652 [2024-12-06 20:48:55.727373] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:38.652 [2024-12-06 20:48:55.727398] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:38.652 [2024-12-06 20:48:55.727435] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:38.652 [2024-12-06 20:48:55.727454] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:38.652 [2024-12-06 20:48:55.727560] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:38.652 [2024-12-06 20:48:55.727571] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:38.652 [2024-12-06 20:48:55.727581] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:38.652 [2024-12-06 20:48:55.727591] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:38.652 [2024-12-06 20:48:55.727602] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:38.652 [2024-12-06 20:48:55.727610] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:38.652 [2024-12-06 20:48:55.727619] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:38.652 [2024-12-06 20:48:55.727629] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:38.652 [2024-12-06 20:48:55.727637] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:38.652 [2024-12-06 20:48:55.727646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.652 [2024-12-06 20:48:55.727654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:38.652 [2024-12-06 20:48:55.727662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:21:38.652 [2024-12-06 20:48:55.727669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.652 [2024-12-06 20:48:55.727752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.652 [2024-12-06 20:48:55.727761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:38.652 [2024-12-06 20:48:55.727768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:38.652 [2024-12-06 20:48:55.727776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.652 [2024-12-06 20:48:55.727881] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:38.652 [2024-12-06 20:48:55.727916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:38.652 [2024-12-06 20:48:55.727926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:38.652 [2024-12-06 20:48:55.727935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.652 [2024-12-06 20:48:55.727943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:38.652 [2024-12-06 20:48:55.727950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:38.652 [2024-12-06 20:48:55.727957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:38.652 [2024-12-06 20:48:55.727964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:38.652 [2024-12-06 20:48:55.727972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:38.652 [2024-12-06 20:48:55.727980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:38.652 [2024-12-06 20:48:55.727987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:38.652 [2024-12-06 20:48:55.727994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:38.652 [2024-12-06 20:48:55.728001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:38.652 [2024-12-06 20:48:55.728015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:38.652 [2024-12-06 20:48:55.728022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:38.652 [2024-12-06 20:48:55.728031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.652 [2024-12-06 20:48:55.728039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:38.652 [2024-12-06 20:48:55.728046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:38.652 [2024-12-06 20:48:55.728053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.652 [2024-12-06 20:48:55.728060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:38.653 [2024-12-06 20:48:55.728068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:38.653 [2024-12-06 20:48:55.728074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.653 [2024-12-06 20:48:55.728082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:38.653 [2024-12-06 20:48:55.728088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:38.653 [2024-12-06 20:48:55.728095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.653 [2024-12-06 20:48:55.728101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:38.653 [2024-12-06 20:48:55.728108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:38.653 [2024-12-06 20:48:55.728114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.653 [2024-12-06 20:48:55.728121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:38.653 [2024-12-06 20:48:55.728128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:38.653 [2024-12-06 20:48:55.728136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.653 [2024-12-06 20:48:55.728143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:38.653 [2024-12-06 20:48:55.728150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:38.653 [2024-12-06 20:48:55.728156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:38.653 [2024-12-06 20:48:55.728163] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:38.653 [2024-12-06 20:48:55.728169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:38.653 [2024-12-06 20:48:55.728176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:38.653 [2024-12-06 20:48:55.728195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:38.653 [2024-12-06 20:48:55.728202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:38.653 [2024-12-06 20:48:55.728209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.653 [2024-12-06 20:48:55.728215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:38.653 [2024-12-06 20:48:55.728222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:38.653 [2024-12-06 20:48:55.728229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.653 [2024-12-06 20:48:55.728236] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:38.653 [2024-12-06 20:48:55.728246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:38.653 [2024-12-06 20:48:55.728255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:38.653 [2024-12-06 20:48:55.728262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.653 [2024-12-06 20:48:55.728271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:38.653 [2024-12-06 20:48:55.728278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:38.653 [2024-12-06 20:48:55.728285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:38.653 [2024-12-06 20:48:55.728301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:38.653 [2024-12-06 20:48:55.728308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:38.653 [2024-12-06 20:48:55.728315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:38.653 [2024-12-06 20:48:55.728323] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:38.653 [2024-12-06 20:48:55.728333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:38.653 [2024-12-06 20:48:55.728345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:38.653 [2024-12-06 20:48:55.728353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:38.653 [2024-12-06 20:48:55.728360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:38.653 [2024-12-06 20:48:55.728367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:38.653 [2024-12-06 20:48:55.728375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:38.653 [2024-12-06 20:48:55.728383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:38.653 [2024-12-06 20:48:55.728391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:38.653 [2024-12-06 20:48:55.728399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:38.653 [2024-12-06 20:48:55.728406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:38.653 [2024-12-06 20:48:55.728413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:38.653 [2024-12-06 20:48:55.728420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:38.653 [2024-12-06 20:48:55.728427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:38.653 [2024-12-06 20:48:55.728435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:38.653 [2024-12-06 20:48:55.728443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:38.653 [2024-12-06 20:48:55.728450] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:38.653 [2024-12-06 20:48:55.728458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:38.653 [2024-12-06 20:48:55.728466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:38.653 [2024-12-06 20:48:55.728473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:38.653 [2024-12-06 20:48:55.728481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:38.653 [2024-12-06 20:48:55.728488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:38.653 [2024-12-06 20:48:55.728495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.653 [2024-12-06 20:48:55.728504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:38.653 [2024-12-06 20:48:55.728512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:21:38.653 [2024-12-06 20:48:55.728519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.653 [2024-12-06 20:48:55.760296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.653 [2024-12-06 20:48:55.760345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:38.653 [2024-12-06 20:48:55.760356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.729 ms 00:21:38.653 [2024-12-06 20:48:55.760369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.653 [2024-12-06 20:48:55.760461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.653 [2024-12-06 20:48:55.760470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:38.653 [2024-12-06 20:48:55.760479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.065 ms 00:21:38.653 [2024-12-06 20:48:55.760487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.913 [2024-12-06 20:48:55.808556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.913 [2024-12-06 20:48:55.808611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:38.913 [2024-12-06 20:48:55.808625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.005 ms 00:21:38.913 [2024-12-06 20:48:55.808635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.913 [2024-12-06 20:48:55.808687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.913 [2024-12-06 20:48:55.808698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:38.913 [2024-12-06 20:48:55.808712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:38.913 [2024-12-06 20:48:55.808721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.913 [2024-12-06 20:48:55.809354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.913 [2024-12-06 20:48:55.809390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:38.913 [2024-12-06 20:48:55.809401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:21:38.913 [2024-12-06 20:48:55.809410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.913 [2024-12-06 20:48:55.809569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.913 [2024-12-06 20:48:55.809595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:38.913 [2024-12-06 20:48:55.809610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:21:38.913 [2024-12-06 20:48:55.809619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.913 [2024-12-06 20:48:55.825810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.913 [2024-12-06 20:48:55.825862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:38.913 [2024-12-06 20:48:55.825874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.170 ms 00:21:38.913 [2024-12-06 20:48:55.825882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.913 [2024-12-06 20:48:55.840328] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:38.913 [2024-12-06 20:48:55.840379] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:38.913 [2024-12-06 20:48:55.840392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.913 [2024-12-06 20:48:55.840402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:38.913 [2024-12-06 20:48:55.840411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.373 ms 00:21:38.913 [2024-12-06 20:48:55.840419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.913 [2024-12-06 20:48:55.866224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.913 [2024-12-06 20:48:55.866280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:38.913 [2024-12-06 20:48:55.866292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.750 ms 00:21:38.913 [2024-12-06 20:48:55.866301] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.913 [2024-12-06 20:48:55.878900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.913 [2024-12-06 20:48:55.879097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:38.913 [2024-12-06 20:48:55.879119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.542 ms 00:21:38.914 [2024-12-06 20:48:55.879129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-06 20:48:55.891961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-06 20:48:55.892006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:38.914 [2024-12-06 20:48:55.892019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.716 ms 00:21:38.914 [2024-12-06 20:48:55.892026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-06 20:48:55.892713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-06 20:48:55.892748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:38.914 [2024-12-06 20:48:55.892759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:21:38.914 [2024-12-06 20:48:55.892770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-06 20:48:55.959258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-06 20:48:55.959329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:38.914 [2024-12-06 20:48:55.959347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.466 ms 00:21:38.914 [2024-12-06 20:48:55.959364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-06 20:48:55.970632] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:38.914 [2024-12-06 20:48:55.974316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-06 20:48:55.974364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:38.914 [2024-12-06 20:48:55.974377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.886 ms 00:21:38.914 [2024-12-06 20:48:55.974388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-06 20:48:55.974485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-06 20:48:55.974497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:38.914 [2024-12-06 20:48:55.974507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:38.914 [2024-12-06 20:48:55.974516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-06 20:48:55.974591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-06 20:48:55.974603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:38.914 [2024-12-06 20:48:55.974613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:38.914 [2024-12-06 20:48:55.974621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-06 20:48:55.974643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-06 20:48:55.974652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:38.914 [2024-12-06 20:48:55.974661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:38.914 [2024-12-06 20:48:55.974668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-06 20:48:55.974703] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:38.914 [2024-12-06 20:48:55.974716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-06 20:48:55.974725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:38.914 [2024-12-06 20:48:55.974733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:38.914 [2024-12-06 20:48:55.974741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-06 20:48:56.000915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-06 20:48:56.001101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:38.914 [2024-12-06 20:48:56.001123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.155 ms 00:21:38.914 [2024-12-06 20:48:56.001139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-06 20:48:56.001218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-06 20:48:56.001228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:38.914 [2024-12-06 20:48:56.001238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:38.914 [2024-12-06 20:48:56.001246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-06 20:48:56.002489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.615 ms, result 0 00:21:40.298  [2024-12-06T20:48:58.374Z] Copying: 36/1024 [MB] (36 MBps) [2024-12-06T20:48:59.349Z] Copying: 81/1024 [MB] (44 MBps) [2024-12-06T20:49:00.291Z] Copying: 130/1024 [MB] (49 MBps) [2024-12-06T20:49:01.232Z] Copying: 165/1024 [MB] (34 MBps) [2024-12-06T20:49:02.178Z] Copying: 209/1024 [MB] (44 MBps) [2024-12-06T20:49:03.123Z] Copying: 254/1024 [MB] (44 MBps) [2024-12-06T20:49:04.067Z] Copying: 297/1024 [MB] (43 MBps) [2024-12-06T20:49:05.454Z] Copying: 336/1024 [MB] (38 MBps) [2024-12-06T20:49:06.026Z] Copying: 379/1024 [MB] (43 MBps) [2024-12-06T20:49:07.414Z] Copying: 422/1024 [MB] (43 MBps) [2024-12-06T20:49:08.356Z] Copying: 466/1024 [MB] (43 MBps) [2024-12-06T20:49:09.300Z] Copying: 507/1024 [MB] (41 MBps) [2024-12-06T20:49:10.243Z] Copying: 550/1024 [MB] (43 MBps) [2024-12-06T20:49:11.187Z] Copying: 593/1024 [MB] (43 MBps) [2024-12-06T20:49:12.131Z] Copying: 637/1024 [MB] (43 MBps) [2024-12-06T20:49:13.076Z] Copying: 680/1024 [MB] (43 MBps) [2024-12-06T20:49:14.022Z] Copying: 724/1024 [MB] (43 MBps) [2024-12-06T20:49:15.484Z] Copying: 767/1024 [MB] (43 MBps) [2024-12-06T20:49:16.078Z] Copying: 812/1024 [MB] (44 MBps) [2024-12-06T20:49:17.020Z] Copying: 856/1024 [MB] (43 MBps) [2024-12-06T20:49:18.408Z] Copying: 900/1024 [MB] (44 MBps) [2024-12-06T20:49:19.350Z] Copying: 942/1024 [MB] (42 MBps) [2024-12-06T20:49:20.291Z] Copying: 985/1024 [MB] (42 MBps) [2024-12-06T20:49:20.291Z] Copying: 1024/1024 [MB] (average 42 MBps)[2024-12-06 20:49:19.940346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-12-06 20:49:19.940402] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:03.158 [2024-12-06 20:49:19.940417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:03.158 [2024-12-06 20:49:19.940426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-12-06 20:49:19.940447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:03.158 [2024-12-06 20:49:19.943268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-12-06 20:49:19.943303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:03.158 [2024-12-06 20:49:19.943320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.804 ms 00:22:03.158 [2024-12-06 20:49:19.943329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-12-06 20:49:19.944837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-12-06 20:49:19.944869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:03.158 [2024-12-06 20:49:19.944879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:22:03.158 [2024-12-06 20:49:19.944899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-12-06 20:49:19.958203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-12-06 20:49:19.958374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:03.158 [2024-12-06 20:49:19.958391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.287 ms 00:22:03.158 [2024-12-06 20:49:19.958400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-12-06 20:49:19.964569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-12-06 20:49:19.964598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:03.158 [2024-12-06 20:49:19.964608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms 00:22:03.158 [2024-12-06 20:49:19.964616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-12-06 20:49:19.988709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-12-06 20:49:19.988744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:03.158 [2024-12-06 20:49:19.988757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.044 ms 00:22:03.158 [2024-12-06 20:49:19.988765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-12-06 20:49:20.003226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-12-06 20:49:20.003258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:03.158 [2024-12-06 20:49:20.003270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.427 ms 00:22:03.159 [2024-12-06 20:49:20.003279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.159 [2024-12-06 20:49:20.003413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.159 [2024-12-06 20:49:20.003427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:03.159 [2024-12-06 20:49:20.003436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:03.159 [2024-12-06 20:49:20.003443] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:03.159 [2024-12-06 20:49:20.026299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.159 [2024-12-06 20:49:20.026330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:03.159 [2024-12-06 20:49:20.026341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.841 ms 00:22:03.159 [2024-12-06 20:49:20.026349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.159 [2024-12-06 20:49:20.048926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.159 [2024-12-06 20:49:20.049097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:03.159 [2024-12-06 20:49:20.049113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.544 ms 00:22:03.159 [2024-12-06 20:49:20.049121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.159 [2024-12-06 20:49:20.071523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.159 [2024-12-06 20:49:20.071558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:03.159 [2024-12-06 20:49:20.071570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.314 ms 00:22:03.159 [2024-12-06 20:49:20.071577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.159 [2024-12-06 20:49:20.093570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.159 [2024-12-06 20:49:20.093601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:03.159 [2024-12-06 20:49:20.093612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.937 ms 00:22:03.159 [2024-12-06 20:49:20.093619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.159 [2024-12-06 20:49:20.093650] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:03.159 [2024-12-06 20:49:20.093664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 
20:49:20.093757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
00:22:03.159 [2024-12-06 20:49:20.093968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.093997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:03.159 [2024-12-06 20:49:20.094200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:03.160 [2024-12-06 20:49:20.094455] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:03.160 [2024-12-06 20:49:20.094466] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5bfd91c7-c6a7-4ebc-88c6-cb1b2f1b017d 00:22:03.160 [2024-12-06 20:49:20.094474] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:03.160 [2024-12-06 20:49:20.094481] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:03.160 [2024-12-06 20:49:20.094488] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:03.160 [2024-12-06 20:49:20.094495] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:03.160 [2024-12-06 20:49:20.094502] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:03.160 [2024-12-06 20:49:20.094516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:03.160 [2024-12-06 20:49:20.094523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:03.160 [2024-12-06 20:49:20.094529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:03.160 [2024-12-06 20:49:20.094535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:03.160 [2024-12-06 20:49:20.094542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.160 [2024-12-06 20:49:20.094549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:03.160 
[2024-12-06 20:49:20.094558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:22:03.160 [2024-12-06 20:49:20.094565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.160 [2024-12-06 20:49:20.107202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.160 [2024-12-06 20:49:20.107233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:03.160 [2024-12-06 20:49:20.107243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.619 ms 00:22:03.160 [2024-12-06 20:49:20.107251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.160 [2024-12-06 20:49:20.107606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.160 [2024-12-06 20:49:20.107622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:03.160 [2024-12-06 20:49:20.107631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:22:03.160 [2024-12-06 20:49:20.107641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.160 [2024-12-06 20:49:20.142684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.160 [2024-12-06 20:49:20.142718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:03.160 [2024-12-06 20:49:20.142728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.160 [2024-12-06 20:49:20.142737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.160 [2024-12-06 20:49:20.142795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.160 [2024-12-06 20:49:20.142803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:03.160 [2024-12-06 20:49:20.142811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.160 [2024-12-06 20:49:20.142824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.160 [2024-12-06 20:49:20.142916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.160 [2024-12-06 20:49:20.142928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:03.160 [2024-12-06 20:49:20.142936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.160 [2024-12-06 20:49:20.142944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.160 [2024-12-06 20:49:20.142960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.160 [2024-12-06 20:49:20.142969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:03.160 [2024-12-06 20:49:20.142977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.160 [2024-12-06 20:49:20.142985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.160 [2024-12-06 20:49:20.224727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.160 [2024-12-06 20:49:20.224929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:03.160 [2024-12-06 20:49:20.224947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.160 [2024-12-06 20:49:20.224955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.420 [2024-12-06 20:49:20.291959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.420 [2024-12-06 20:49:20.292128] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:03.420 [2024-12-06 20:49:20.292147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.420 [2024-12-06 20:49:20.292160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.420 [2024-12-06 20:49:20.292258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.421 [2024-12-06 20:49:20.292274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:03.421 [2024-12-06 20:49:20.292286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.421 [2024-12-06 20:49:20.292299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.421 [2024-12-06 20:49:20.292370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.421 [2024-12-06 20:49:20.292380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:03.421 [2024-12-06 20:49:20.292388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.421 [2024-12-06 20:49:20.292396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.421 [2024-12-06 20:49:20.292495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.421 [2024-12-06 20:49:20.292506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:03.421 [2024-12-06 20:49:20.292515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.421 [2024-12-06 20:49:20.292522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.421 [2024-12-06 20:49:20.292554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.421 [2024-12-06 20:49:20.292563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:03.421 [2024-12-06 20:49:20.292572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.421 [2024-12-06 20:49:20.292579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.421 [2024-12-06 20:49:20.292618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.421 [2024-12-06 20:49:20.292629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:03.421 [2024-12-06 20:49:20.292637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.421 [2024-12-06 20:49:20.292645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.421 [2024-12-06 20:49:20.292689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.421 [2024-12-06 20:49:20.292698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:03.421 [2024-12-06 20:49:20.292707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.421 [2024-12-06 20:49:20.292715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.421 [2024-12-06 20:49:20.292836] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.455 ms, result 0 00:22:04.805 00:22:04.805 00:22:05.066 20:49:21 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:05.066 [2024-12-06 20:49:22.002421] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 
initialization... 00:22:05.066 [2024-12-06 20:49:22.002688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77708 ] 00:22:05.066 [2024-12-06 20:49:22.160506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.326 [2024-12-06 20:49:22.265797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.586 [2024-12-06 20:49:22.542075] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:05.586 [2024-12-06 20:49:22.542143] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:05.586 [2024-12-06 20:49:22.696077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.586 [2024-12-06 20:49:22.696267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:05.586 [2024-12-06 20:49:22.696289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:05.586 [2024-12-06 20:49:22.696298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.586 [2024-12-06 20:49:22.696350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.586 [2024-12-06 20:49:22.696362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:05.586 [2024-12-06 20:49:22.696371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:05.586 [2024-12-06 20:49:22.696379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.586 [2024-12-06 20:49:22.696401] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:05.586 [2024-12-06 20:49:22.697066] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:05.586 [2024-12-06 20:49:22.697089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.586 [2024-12-06 20:49:22.697097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:05.586 [2024-12-06 20:49:22.697106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:22:05.586 [2024-12-06 20:49:22.697114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.586 [2024-12-06 20:49:22.698397] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:05.586 [2024-12-06 20:49:22.711046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.586 [2024-12-06 20:49:22.711180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:05.586 [2024-12-06 20:49:22.711198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.651 ms 00:22:05.586 [2024-12-06 20:49:22.711208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.586 [2024-12-06 20:49:22.711261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.586 [2024-12-06 20:49:22.711270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:05.586 [2024-12-06 20:49:22.711279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:05.586 [2024-12-06 20:49:22.711286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.586 [2024-12-06 20:49:22.717744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:05.586 [2024-12-06 20:49:22.717773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:05.586 [2024-12-06 20:49:22.717783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.399 ms 00:22:05.586 [2024-12-06 20:49:22.717795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.586 [2024-12-06 20:49:22.717870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.586 [2024-12-06 20:49:22.717880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:05.586 [2024-12-06 20:49:22.717903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:05.586 [2024-12-06 20:49:22.717911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.586 [2024-12-06 20:49:22.717953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.586 [2024-12-06 20:49:22.717963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:05.847 [2024-12-06 20:49:22.717971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:05.847 [2024-12-06 20:49:22.717980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.847 [2024-12-06 20:49:22.718005] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:05.847 [2024-12-06 20:49:22.721636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.847 [2024-12-06 20:49:22.721662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:05.847 [2024-12-06 20:49:22.721675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.636 ms 00:22:05.847 [2024-12-06 20:49:22.721683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.847 [2024-12-06 20:49:22.721714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.847 [2024-12-06 20:49:22.721724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:05.848 [2024-12-06 20:49:22.721733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:05.848 [2024-12-06 20:49:22.721740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.848 [2024-12-06 20:49:22.721767] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:05.848 [2024-12-06 20:49:22.721789] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:05.848 [2024-12-06 20:49:22.721825] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:05.848 [2024-12-06 20:49:22.721844] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:05.848 [2024-12-06 20:49:22.721971] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:05.848 [2024-12-06 20:49:22.721984] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:05.848 [2024-12-06 20:49:22.721995] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:05.848 [2024-12-06 20:49:22.722005] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:05.848 [2024-12-06 20:49:22.722014] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:05.848 [2024-12-06 20:49:22.722023] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:05.848 [2024-12-06 20:49:22.722031] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:05.848 [2024-12-06 20:49:22.722041] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:05.848 [2024-12-06 20:49:22.722049] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:05.848 [2024-12-06 20:49:22.722057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.848 [2024-12-06 20:49:22.722065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:05.848 [2024-12-06 20:49:22.722073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:22:05.848 [2024-12-06 20:49:22.722080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.848 [2024-12-06 20:49:22.722162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.848 [2024-12-06 20:49:22.722172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:05.848 [2024-12-06 20:49:22.722180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:05.848 [2024-12-06 20:49:22.722187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.848 [2024-12-06 20:49:22.722300] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:05.848 [2024-12-06 20:49:22.722312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:05.848 [2024-12-06 20:49:22.722320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:05.848 [2024-12-06 20:49:22.722329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:05.848 [2024-12-06 20:49:22.722344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:05.848 [2024-12-06 20:49:22.722360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:05.848 [2024-12-06 20:49:22.722367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:05.848 [2024-12-06 20:49:22.722380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:05.848 [2024-12-06 20:49:22.722387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:05.848 [2024-12-06 20:49:22.722395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:05.848 [2024-12-06 20:49:22.722408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:05.848 [2024-12-06 20:49:22.722415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:05.848 [2024-12-06 20:49:22.722422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:05.848 [2024-12-06 20:49:22.722435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:05.848 [2024-12-06 20:49:22.722441] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:05.848 [2024-12-06 20:49:22.722457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.848 [2024-12-06 20:49:22.722471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:05.848 [2024-12-06 20:49:22.722477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.848 [2024-12-06 20:49:22.722490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:05.848 [2024-12-06 20:49:22.722497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.848 [2024-12-06 20:49:22.722509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:05.848 [2024-12-06 20:49:22.722517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.848 [2024-12-06 20:49:22.722529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:05.848 [2024-12-06 20:49:22.722536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:05.848 [2024-12-06 20:49:22.722549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:05.848 [2024-12-06 20:49:22.722555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:05.848 [2024-12-06 20:49:22.722562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:05.848 [2024-12-06 20:49:22.722569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:05.848 [2024-12-06 20:49:22.722575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:05.848 [2024-12-06 20:49:22.722582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:05.848 [2024-12-06 20:49:22.722595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:05.848 [2024-12-06 20:49:22.722602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722608] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:05.848 [2024-12-06 20:49:22.722615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:05.848 [2024-12-06 20:49:22.722623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:05.848 [2024-12-06 20:49:22.722630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.848 [2024-12-06 20:49:22.722637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:05.848 [2024-12-06 20:49:22.722644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:05.848 [2024-12-06 20:49:22.722650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:05.848 
[2024-12-06 20:49:22.722658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:05.848 [2024-12-06 20:49:22.722666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:05.848 [2024-12-06 20:49:22.722673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:05.848 [2024-12-06 20:49:22.722681] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:05.848 [2024-12-06 20:49:22.722690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:05.849 [2024-12-06 20:49:22.722701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:05.849 [2024-12-06 20:49:22.722708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:05.849 [2024-12-06 20:49:22.722715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:05.849 [2024-12-06 20:49:22.722722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:05.849 [2024-12-06 20:49:22.722730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:05.849 [2024-12-06 20:49:22.722737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:05.849 [2024-12-06 20:49:22.722744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:05.849 [2024-12-06 20:49:22.722750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:05.849 [2024-12-06 20:49:22.722757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:05.849 [2024-12-06 20:49:22.722765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:05.849 [2024-12-06 20:49:22.722772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:05.849 [2024-12-06 20:49:22.722780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:05.849 [2024-12-06 20:49:22.722786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:05.849 [2024-12-06 20:49:22.722794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:05.849 [2024-12-06 20:49:22.722802] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:05.849 [2024-12-06 20:49:22.722810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:05.849 [2024-12-06 20:49:22.722818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:05.849 [2024-12-06 20:49:22.722825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:05.849 [2024-12-06 20:49:22.722832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:05.849 [2024-12-06 20:49:22.722839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:05.849 [2024-12-06 20:49:22.722846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.722853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:05.849 [2024-12-06 20:49:22.722861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:22:05.849 [2024-12-06 20:49:22.722867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.849 [2024-12-06 20:49:22.751465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.751498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:05.849 [2024-12-06 20:49:22.751508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.534 ms 00:22:05.849 [2024-12-06 20:49:22.751519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.849 [2024-12-06 20:49:22.751602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.751610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:05.849 [2024-12-06 20:49:22.751618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:05.849 [2024-12-06 20:49:22.751626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.849 [2024-12-06 20:49:22.796399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.796437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:05.849 [2024-12-06 20:49:22.796450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.723 ms 00:22:05.849 [2024-12-06 20:49:22.796459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.849 [2024-12-06 20:49:22.796498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.796508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:05.849 [2024-12-06 20:49:22.796519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:05.849 [2024-12-06 20:49:22.796527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.849 [2024-12-06 20:49:22.797010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.797038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:05.849 [2024-12-06 20:49:22.797048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:22:05.849 [2024-12-06 20:49:22.797056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.849 [2024-12-06 20:49:22.797186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.797197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:05.849 [2024-12-06 20:49:22.797211] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:22:05.849 [2024-12-06 20:49:22.797218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.849 [2024-12-06 20:49:22.811258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.811288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:05.849 [2024-12-06 20:49:22.811298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.019 ms 00:22:05.849 [2024-12-06 20:49:22.811307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.849 [2024-12-06 20:49:22.824237] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:05.849 [2024-12-06 20:49:22.824270] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:05.849 [2024-12-06 20:49:22.824282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.824291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:05.849 [2024-12-06 20:49:22.824299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.884 ms 00:22:05.849 [2024-12-06 20:49:22.824307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.849 [2024-12-06 20:49:22.848578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.848611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:05.849 [2024-12-06 20:49:22.848623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.234 ms 00:22:05.849 [2024-12-06 20:49:22.848632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.849 [2024-12-06 20:49:22.859668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.859698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:05.849 [2024-12-06 20:49:22.859707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.992 ms 00:22:05.849 [2024-12-06 20:49:22.859714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.849 [2024-12-06 20:49:22.870935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.870964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:05.849 [2024-12-06 20:49:22.870974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.190 ms 00:22:05.849 [2024-12-06 20:49:22.870982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.849 [2024-12-06 20:49:22.871573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.849 [2024-12-06 20:49:22.871592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:05.849 [2024-12-06 20:49:22.871604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:22:05.849 [2024-12-06 20:49:22.871612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.850 [2024-12-06 20:49:22.930371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.850 [2024-12-06 20:49:22.930556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:05.850 [2024-12-06 20:49:22.930581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 58.742 ms 00:22:05.850 [2024-12-06 20:49:22.930590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.850 [2024-12-06 20:49:22.941584] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:05.850 [2024-12-06 20:49:22.944306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.850 [2024-12-06 20:49:22.944335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:05.850 [2024-12-06 20:49:22.944346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.680 ms 00:22:05.850 [2024-12-06 20:49:22.944354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.850 [2024-12-06 20:49:22.944442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.850 [2024-12-06 20:49:22.944454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:05.850 [2024-12-06 20:49:22.944467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:05.850 [2024-12-06 20:49:22.944474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.850 [2024-12-06 20:49:22.944543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.850 [2024-12-06 20:49:22.944554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:05.850 [2024-12-06 20:49:22.944563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:05.850 [2024-12-06 20:49:22.944570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.850 [2024-12-06 20:49:22.944589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.850 [2024-12-06 20:49:22.944598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:05.850 [2024-12-06 20:49:22.944606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:05.850 [2024-12-06 20:49:22.944614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.850 [2024-12-06 20:49:22.944648] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:05.850 [2024-12-06 20:49:22.944659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.850 [2024-12-06 20:49:22.944666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:05.850 [2024-12-06 20:49:22.944674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:05.850 [2024-12-06 20:49:22.944681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.850 [2024-12-06 20:49:22.967727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.850 [2024-12-06 20:49:22.967834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:05.850 [2024-12-06 20:49:22.967906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.030 ms 00:22:05.850 [2024-12-06 20:49:22.967932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.850 [2024-12-06 20:49:22.968056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.850 [2024-12-06 20:49:22.968084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:05.850 [2024-12-06 20:49:22.968128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:05.850 [2024-12-06 20:49:22.968151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:05.850 [2024-12-06 20:49:22.969222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.696 ms, result 0 00:23:07.234 [2024-12-06T20:49:25.306Z] Copying: 19/1024 [MB] (19 MBps) [... 54 intermediate copy-progress frames (10-45 MBps) elided ...] [2024-12-06T20:50:19.182Z] Copying: 1024/1024 [MB] (average 18
MBps)[2024-12-06 20:50:18.956235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.049 [2024-12-06 20:50:18.956356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:02.049 [2024-12-06 20:50:18.956391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:02.049 [2024-12-06 20:50:18.956414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.049 [2024-12-06 20:50:18.956471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:02.049 [2024-12-06 20:50:18.964896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.049 [2024-12-06 20:50:18.964938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:02.049 [2024-12-06 20:50:18.964948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.381 ms 00:23:02.049 [2024-12-06 20:50:18.964958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.049 [2024-12-06 20:50:18.965191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.049 [2024-12-06 20:50:18.965202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:02.049 [2024-12-06 20:50:18.965211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:23:02.049 [2024-12-06 20:50:18.965219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.049 [2024-12-06 20:50:18.968659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.049 [2024-12-06 20:50:18.968681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:02.049 [2024-12-06 20:50:18.968691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.426 ms 00:23:02.049 [2024-12-06 20:50:18.968704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.049 [2024-12-06 20:50:18.975018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.049 [2024-12-06 20:50:18.975048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:02.049 [2024-12-06 20:50:18.975059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.296 ms 00:23:02.049 [2024-12-06 20:50:18.975067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.049 [2024-12-06 20:50:19.000019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.049 [2024-12-06 20:50:19.000056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:02.049 [2024-12-06 20:50:19.000068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.881 ms 00:23:02.049 [2024-12-06 20:50:19.000076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.049 [2024-12-06 20:50:19.014114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.049 [2024-12-06 20:50:19.014147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:02.049 [2024-12-06 20:50:19.014159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.016 ms 00:23:02.049 [2024-12-06 20:50:19.014168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.049 [2024-12-06 20:50:19.014303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.049 [2024-12-06 20:50:19.014314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:02.049 [2024-12-06 
20:50:19.014324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:02.049 [2024-12-06 20:50:19.014331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.049 [2024-12-06 20:50:19.037430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.049 [2024-12-06 20:50:19.037639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:02.049 [2024-12-06 20:50:19.037656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.085 ms 00:23:02.049 [2024-12-06 20:50:19.037664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.049 [2024-12-06 20:50:19.060906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.049 [2024-12-06 20:50:19.060945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:02.049 [2024-12-06 20:50:19.060957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.220 ms 00:23:02.049 [2024-12-06 20:50:19.060964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.049 [2024-12-06 20:50:19.083324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.049 [2024-12-06 20:50:19.083357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:02.049 [2024-12-06 20:50:19.083368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.338 ms 00:23:02.049 [2024-12-06 20:50:19.083375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.049 [2024-12-06 20:50:19.106050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.049 [2024-12-06 20:50:19.106081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:02.049 [2024-12-06 20:50:19.106091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.631 ms 00:23:02.049 [2024-12-06 20:50:19.106098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.049 [2024-12-06 20:50:19.106116] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:02.049 [2024-12-06 20:50:19.106136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 
00:23:02.049 [2024-12-06 20:50:19.106220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:02.049 [2024-12-06 20:50:19.106330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 
wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106790] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:02.050 [2024-12-06 20:50:19.106941] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:02.050 [2024-12-06 20:50:19.106949] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5bfd91c7-c6a7-4ebc-88c6-cb1b2f1b017d 00:23:02.050 [2024-12-06 20:50:19.106957] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:02.050 [2024-12-06 20:50:19.106964] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:02.050 [2024-12-06 20:50:19.106972] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:02.050 [2024-12-06 20:50:19.106980] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:02.050 [2024-12-06 20:50:19.106994] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:02.050 [2024-12-06 20:50:19.107002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:02.050 [2024-12-06 20:50:19.107009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:02.050 [2024-12-06 20:50:19.107016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:02.050 [2024-12-06 20:50:19.107023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:02.050 [2024-12-06 20:50:19.107030] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:23:02.050 [2024-12-06 20:50:19.107038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:02.050 [2024-12-06 20:50:19.107047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:23:02.050 [2024-12-06 20:50:19.107058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.050 [2024-12-06 20:50:19.120375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.050 [2024-12-06 20:50:19.120411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:02.050 [2024-12-06 20:50:19.120422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.300 ms 00:23:02.051 [2024-12-06 20:50:19.120430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.051 [2024-12-06 20:50:19.120800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.051 [2024-12-06 20:50:19.120811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:02.051 [2024-12-06 20:50:19.120824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:23:02.051 [2024-12-06 20:50:19.120832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.338 [2024-12-06 20:50:19.156212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.338 [2024-12-06 20:50:19.156248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.338 [2024-12-06 20:50:19.156258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.338 [2024-12-06 20:50:19.156267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.338 [2024-12-06 20:50:19.156330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.338 [2024-12-06 20:50:19.156339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.338 [2024-12-06 20:50:19.156351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.338 [2024-12-06 20:50:19.156360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.339 [2024-12-06 20:50:19.156418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.339 [2024-12-06 20:50:19.156428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.339 [2024-12-06 20:50:19.156437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.339 [2024-12-06 20:50:19.156444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.339 [2024-12-06 20:50:19.156459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.339 [2024-12-06 20:50:19.156467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.339 [2024-12-06 20:50:19.156476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.339 [2024-12-06 20:50:19.156487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.339 [2024-12-06 20:50:19.237842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.339 [2024-12-06 20:50:19.237886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.339 [2024-12-06 20:50:19.237918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.339 [2024-12-06 20:50:19.237926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:02.339 [2024-12-06 20:50:19.303314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.339 [2024-12-06 20:50:19.303361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.339 [2024-12-06 20:50:19.303378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.339 [2024-12-06 20:50:19.303386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.339 [2024-12-06 20:50:19.303467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.339 [2024-12-06 20:50:19.303478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.339 [2024-12-06 20:50:19.303487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.339 [2024-12-06 20:50:19.303495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.339 [2024-12-06 20:50:19.303530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.339 [2024-12-06 20:50:19.303539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.339 [2024-12-06 20:50:19.303547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.339 [2024-12-06 20:50:19.303555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.339 [2024-12-06 20:50:19.303645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.339 [2024-12-06 20:50:19.303655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.339 [2024-12-06 20:50:19.303663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.339 [2024-12-06 20:50:19.303670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.339 [2024-12-06 20:50:19.303703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.339 [2024-12-06 20:50:19.303712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:02.339 [2024-12-06 20:50:19.303720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.339 [2024-12-06 20:50:19.303728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.339 [2024-12-06 20:50:19.303771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.339 [2024-12-06 20:50:19.303782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.339 [2024-12-06 20:50:19.303790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.339 [2024-12-06 20:50:19.303797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.339 [2024-12-06 20:50:19.303843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.339 [2024-12-06 20:50:19.303854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.339 [2024-12-06 20:50:19.303863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.339 [2024-12-06 20:50:19.303873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.339 [2024-12-06 20:50:19.304028] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.795 ms, result 0 00:23:03.275 00:23:03.275 00:23:03.275 20:50:20 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:05.183 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:05.183 20:50:22 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:05.183 [2024-12-06 20:50:22.310375] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:23:05.183 [2024-12-06 20:50:22.310505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78330 ] 00:23:05.443 [2024-12-06 20:50:22.471629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.703 [2024-12-06 20:50:22.579433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.964 [2024-12-06 20:50:22.857629] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:05.964 [2024-12-06 20:50:22.857699] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:05.964 [2024-12-06 20:50:23.012309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.964 [2024-12-06 20:50:23.012359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:05.964 [2024-12-06 20:50:23.012373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:05.964 [2024-12-06 20:50:23.012381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.964 [2024-12-06 20:50:23.012427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.964 [2024-12-06 20:50:23.012439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:05.964 [2024-12-06 20:50:23.012447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:05.964 [2024-12-06 20:50:23.012454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.964 [2024-12-06 20:50:23.012473] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:05.964 [2024-12-06 20:50:23.013377] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:05.964 [2024-12-06 20:50:23.013478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.964 [2024-12-06 20:50:23.013489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:05.964 [2024-12-06 20:50:23.013498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:23:05.964 [2024-12-06 20:50:23.013507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.964 [2024-12-06 20:50:23.015271] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:05.964 [2024-12-06 20:50:23.028011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.964 [2024-12-06 20:50:23.028200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:05.964 [2024-12-06 20:50:23.028219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.742 ms 00:23:05.964 [2024-12-06 20:50:23.028228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.964 [2024-12-06 20:50:23.028331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.964 [2024-12-06 
20:50:23.028349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:05.964 [2024-12-06 20:50:23.028359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:05.964 [2024-12-06 20:50:23.028367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.964 [2024-12-06 20:50:23.034954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.964 [2024-12-06 20:50:23.035099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:05.964 [2024-12-06 20:50:23.035114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.520 ms 00:23:05.964 [2024-12-06 20:50:23.035126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.964 [2024-12-06 20:50:23.035198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.964 [2024-12-06 20:50:23.035208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:05.964 [2024-12-06 20:50:23.035216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:05.964 [2024-12-06 20:50:23.035223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.964 [2024-12-06 20:50:23.035265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.964 [2024-12-06 20:50:23.035276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:05.964 [2024-12-06 20:50:23.035285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:05.964 [2024-12-06 20:50:23.035292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.964 [2024-12-06 20:50:23.035320] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:05.964 [2024-12-06 20:50:23.038809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.964 [2024-12-06 20:50:23.038938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:05.964 [2024-12-06 20:50:23.038958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.497 ms 00:23:05.964 [2024-12-06 20:50:23.038966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.964 [2024-12-06 20:50:23.039002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.964 [2024-12-06 20:50:23.039011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:05.964 [2024-12-06 20:50:23.039019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:05.964 [2024-12-06 20:50:23.039026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.964 [2024-12-06 20:50:23.039053] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:05.964 [2024-12-06 20:50:23.039075] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:05.964 [2024-12-06 20:50:23.039110] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:05.964 [2024-12-06 20:50:23.039127] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:05.964 [2024-12-06 20:50:23.039233] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:05.964 [2024-12-06 20:50:23.039244] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:05.964 [2024-12-06 20:50:23.039254] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:05.964 [2024-12-06 20:50:23.039264] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:05.964 [2024-12-06 20:50:23.039273] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:05.964 [2024-12-06 20:50:23.039281] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:05.964 [2024-12-06 20:50:23.039289] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:05.964 [2024-12-06 20:50:23.039298] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:05.964 [2024-12-06 20:50:23.039305] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:05.964 [2024-12-06 20:50:23.039313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.964 [2024-12-06 20:50:23.039321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:05.964 [2024-12-06 20:50:23.039330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:23:05.964 [2024-12-06 20:50:23.039337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.964 [2024-12-06 20:50:23.039419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.965 [2024-12-06 20:50:23.039428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:05.965 [2024-12-06 20:50:23.039435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:05.965 [2024-12-06 20:50:23.039442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.965 [2024-12-06 20:50:23.039557] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:05.965 [2024-12-06 20:50:23.039568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:05.965 [2024-12-06 20:50:23.039576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:05.965 [2024-12-06 20:50:23.039584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:05.965 [2024-12-06 20:50:23.039600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:05.965 [2024-12-06 20:50:23.039613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:05.965 [2024-12-06 20:50:23.039621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:05.965 [2024-12-06 20:50:23.039634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:05.965 [2024-12-06 20:50:23.039641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:05.965 [2024-12-06 20:50:23.039648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:05.965 [2024-12-06 20:50:23.039661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:05.965 [2024-12-06 20:50:23.039667] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:23:05.965 [2024-12-06 20:50:23.039673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:05.965 [2024-12-06 20:50:23.039686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:05.965 [2024-12-06 20:50:23.039693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:05.965 [2024-12-06 20:50:23.039709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:05.965 [2024-12-06 20:50:23.039722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:05.965 [2024-12-06 20:50:23.039729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:05.965 [2024-12-06 20:50:23.039742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:05.965 [2024-12-06 20:50:23.039748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:05.965 [2024-12-06 20:50:23.039760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:05.965 [2024-12-06 20:50:23.039766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:05.965 [2024-12-06 20:50:23.039778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:05.965 [2024-12-06 20:50:23.039785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:05.965 [2024-12-06 20:50:23.039799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:05.965 [2024-12-06 20:50:23.039806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:05.965 [2024-12-06 20:50:23.039812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:05.965 [2024-12-06 20:50:23.039818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:05.965 [2024-12-06 20:50:23.039824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:05.965 [2024-12-06 20:50:23.039830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:05.965 [2024-12-06 20:50:23.039844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:05.965 [2024-12-06 20:50:23.039851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039858] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:05.965 [2024-12-06 20:50:23.039865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:05.965 [2024-12-06 20:50:23.039873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:05.965 [2024-12-06 
20:50:23.039880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.965 [2024-12-06 20:50:23.039901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:05.965 [2024-12-06 20:50:23.039909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:05.965 [2024-12-06 20:50:23.039915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:05.965 [2024-12-06 20:50:23.039922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:05.965 [2024-12-06 20:50:23.039929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:05.965 [2024-12-06 20:50:23.039937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:05.965 [2024-12-06 20:50:23.039946] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:05.965 [2024-12-06 20:50:23.039955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:05.965 [2024-12-06 20:50:23.039966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:05.965 [2024-12-06 20:50:23.039974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:05.965 [2024-12-06 20:50:23.039981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:05.965 [2024-12-06 20:50:23.039988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:05.965 [2024-12-06 20:50:23.039996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:05.965 [2024-12-06 20:50:23.040003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:05.965 [2024-12-06 20:50:23.040011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:05.965 [2024-12-06 20:50:23.040018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:05.965 [2024-12-06 20:50:23.040025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:05.965 [2024-12-06 20:50:23.040032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:05.965 [2024-12-06 20:50:23.040039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:05.965 [2024-12-06 20:50:23.040047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:05.965 [2024-12-06 20:50:23.040054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:05.965 [2024-12-06 20:50:23.040061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:05.965 [2024-12-06 
20:50:23.040068] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:05.965 [2024-12-06 20:50:23.040076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:05.965 [2024-12-06 20:50:23.040084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:05.965 [2024-12-06 20:50:23.040091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:05.965 [2024-12-06 20:50:23.040099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:05.965 [2024-12-06 20:50:23.040106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:05.965 [2024-12-06 20:50:23.040113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.965 [2024-12-06 20:50:23.040120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:05.965 [2024-12-06 20:50:23.040127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:23:05.965 [2024-12-06 20:50:23.040134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.965 [2024-12-06 20:50:23.069206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.965 [2024-12-06 20:50:23.069243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:05.965 [2024-12-06 20:50:23.069254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.007 ms 00:23:05.965 [2024-12-06 20:50:23.069266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.965 [2024-12-06 20:50:23.069352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.965 [2024-12-06 20:50:23.069360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:05.965 [2024-12-06 20:50:23.069369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:05.965 [2024-12-06 20:50:23.069376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.109506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.109546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:06.228 [2024-12-06 20:50:23.109558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.076 ms 00:23:06.228 [2024-12-06 20:50:23.109566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.109607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.109617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:06.228 [2024-12-06 20:50:23.109629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:06.228 [2024-12-06 20:50:23.109637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.110123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.110140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:06.228 [2024-12-06 20:50:23.110151] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:23:06.228 [2024-12-06 20:50:23.110160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.110291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.110301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:06.228 [2024-12-06 20:50:23.110313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:23:06.228 [2024-12-06 20:50:23.110321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.124533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.124566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:06.228 [2024-12-06 20:50:23.124575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.192 ms 00:23:06.228 [2024-12-06 20:50:23.124583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.137658] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:06.228 [2024-12-06 20:50:23.137692] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:06.228 [2024-12-06 20:50:23.137704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.137712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:06.228 [2024-12-06 20:50:23.137721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.028 ms 00:23:06.228 [2024-12-06 20:50:23.137730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.162093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.162125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:06.228 [2024-12-06 20:50:23.162137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.323 ms 00:23:06.228 [2024-12-06 20:50:23.162146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.173730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.173760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:06.228 [2024-12-06 20:50:23.173770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.539 ms 00:23:06.228 [2024-12-06 20:50:23.173778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.184657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.184688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:06.228 [2024-12-06 20:50:23.184698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.847 ms 00:23:06.228 [2024-12-06 20:50:23.184705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.185315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.185341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:06.228 [2024-12-06 20:50:23.185353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.527 ms 00:23:06.228 [2024-12-06 20:50:23.185361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.244808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.244851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:06.228 [2024-12-06 20:50:23.244869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.430 ms 00:23:06.228 [2024-12-06 20:50:23.244877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.255585] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:06.228 [2024-12-06 20:50:23.258508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.258540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:06.228 [2024-12-06 20:50:23.258552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.563 ms 00:23:06.228 [2024-12-06 20:50:23.258560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.258643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.258654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:06.228 [2024-12-06 20:50:23.258667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:06.228 [2024-12-06 20:50:23.258676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.258746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.258757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:06.228 [2024-12-06 20:50:23.258765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:06.228 [2024-12-06 20:50:23.258773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.258792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.258801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:06.228 [2024-12-06 20:50:23.258810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:06.228 [2024-12-06 20:50:23.258817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.258852] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:06.228 [2024-12-06 20:50:23.258863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.258871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:06.228 [2024-12-06 20:50:23.258878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:06.228 [2024-12-06 20:50:23.258886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.282479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.282601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:06.228 [2024-12-06 20:50:23.282663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.427 ms 00:23:06.228 [2024-12-06 20:50:23.282686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:06.228 [2024-12-06 20:50:23.282753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.228 [2024-12-06 20:50:23.282764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:06.228 [2024-12-06 20:50:23.282772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:06.228 [2024-12-06 20:50:23.282780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.228 [2024-12-06 20:50:23.283867] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 271.109 ms, result 0 00:23:07.173 [2024-12-06T20:51:28.255Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-06 20:51:28.087503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.122 [2024-12-06 20:51:28.087569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:11.122 [2024-12-06 20:51:28.087585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:11.122 [2024-12-06 20:51:28.087593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.122 [2024-12-06 20:51:28.087616] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:11.122 [2024-12-06 20:51:28.090710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.122 [2024-12-06 20:51:28.090933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:11.122 [2024-12-06 20:51:28.090957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.079 ms 00:24:11.122 [2024-12-06 20:51:28.090966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.122 [2024-12-06 20:51:28.094193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.122 [2024-12-06 20:51:28.094363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:11.122 [2024-12-06 20:51:28.094382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.193 ms 00:24:11.122 [2024-12-06 20:51:28.094391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.122 [2024-12-06 20:51:28.113182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.122 [2024-12-06 20:51:28.113346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:11.122 [2024-12-06 20:51:28.113367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.771 ms 00:24:11.122 [2024-12-06 20:51:28.113383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.122 [2024-12-06 20:51:28.119631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.122 [2024-12-06 20:51:28.119674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:11.122 [2024-12-06 20:51:28.119687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.213 ms 00:24:11.122 [2024-12-06 20:51:28.119696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.122 [2024-12-06 20:51:28.146526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.122 [2024-12-06 20:51:28.146578]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:11.122 [2024-12-06 20:51:28.146591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.765 ms 00:24:11.122 [2024-12-06 20:51:28.146599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.122 [2024-12-06 20:51:28.162612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.122 [2024-12-06 20:51:28.162804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:11.122 [2024-12-06 20:51:28.162828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.967 ms 00:24:11.122 [2024-12-06 20:51:28.162837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.122 [2024-12-06 20:51:28.165182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.122 [2024-12-06 20:51:28.165224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:11.122 [2024-12-06 20:51:28.165235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:24:11.122 [2024-12-06 20:51:28.165243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.122 [2024-12-06 20:51:28.191357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.122 [2024-12-06 20:51:28.191405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:11.122 [2024-12-06 20:51:28.191417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.100 ms 00:24:11.122 [2024-12-06 20:51:28.191425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.122 [2024-12-06 20:51:28.216817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.122 [2024-12-06 20:51:28.216863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:11.122 [2024-12-06 20:51:28.216875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.346 ms 00:24:11.122 [2024-12-06 20:51:28.216881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.122 [2024-12-06 20:51:28.238922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.122 [2024-12-06 20:51:28.238964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:11.122 [2024-12-06 20:51:28.238974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.971 ms 00:24:11.122 [2024-12-06 20:51:28.238980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.385 [2024-12-06 20:51:28.258166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.385 [2024-12-06 20:51:28.258204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:11.385 [2024-12-06 20:51:28.258214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.121 ms 00:24:11.385 [2024-12-06 20:51:28.258220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.385 [2024-12-06 20:51:28.258255] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:11.385 [2024-12-06 20:51:28.258274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 768 / 261120 wr_cnt: 1 state: open 00:24:11.386 [2024-12-06 20:51:28.258285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258587] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258731] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:11.386 [2024-12-06 20:51:28.258794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:11.387 [2024-12-06 20:51:28.258800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:11.387 [2024-12-06 20:51:28.258805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:11.387 [2024-12-06 20:51:28.258811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:11.387 [2024-12-06 20:51:28.258817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:11.387 [2024-12-06 20:51:28.258822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:11.387 [2024-12-06 20:51:28.258828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:11.387 [2024-12-06 20:51:28.258835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:11.387 [2024-12-06 20:51:28.258841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:11.387 [2024-12-06 20:51:28.258847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:11.387 [2024-12-06 20:51:28.258853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:11.387 [2024-12-06 20:51:28.258858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:11.387 [2024-12-06 20:51:28.258870] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:11.387 [2024-12-06 20:51:28.258877] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5bfd91c7-c6a7-4ebc-88c6-cb1b2f1b017d 00:24:11.387 [2024-12-06 20:51:28.258884] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 768 00:24:11.387 [2024-12-06 20:51:28.258911] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1728 00:24:11.387 [2024-12-06 20:51:28.258917] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 768 00:24:11.387 [2024-12-06 20:51:28.258925] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.2500 00:24:11.387 [2024-12-06 20:51:28.258937] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:11.387 [2024-12-06 20:51:28.258943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:11.387 [2024-12-06 20:51:28.258949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:11.387 [2024-12-06 20:51:28.258954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:11.387 [2024-12-06 20:51:28.258959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:11.387 [2024-12-06 20:51:28.258965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.387 [2024-12-06 20:51:28.258971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:11.387 [2024-12-06 20:51:28.258977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:24:11.387 [2024-12-06 20:51:28.258985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.269358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.387 [2024-12-06 20:51:28.269391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:11.387 [2024-12-06 20:51:28.269399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.347 ms 00:24:11.387 [2024-12-06 20:51:28.269405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.269707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.387 [2024-12-06 20:51:28.269726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:11.387 [2024-12-06 20:51:28.269733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:24:11.387 [2024-12-06 20:51:28.269739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.297443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.387 [2024-12-06 20:51:28.297475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:11.387 [2024-12-06 20:51:28.297484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.387 [2024-12-06 20:51:28.297491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.297537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.387 [2024-12-06 20:51:28.297547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:11.387 [2024-12-06 20:51:28.297553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.387 [2024-12-06 20:51:28.297559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.297612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.387 [2024-12-06 20:51:28.297620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:11.387 [2024-12-06 20:51:28.297626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:24:11.387 [2024-12-06 20:51:28.297632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.297644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.387 [2024-12-06 20:51:28.297650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:11.387 [2024-12-06 20:51:28.297658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.387 [2024-12-06 20:51:28.297664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.359122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.387 [2024-12-06 20:51:28.359156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:11.387 [2024-12-06 20:51:28.359166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.387 [2024-12-06 20:51:28.359173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.408177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.387 [2024-12-06 20:51:28.408301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:11.387 [2024-12-06 20:51:28.408313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.387 [2024-12-06 20:51:28.408319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.408372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.387 [2024-12-06 20:51:28.408380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:11.387 [2024-12-06 20:51:28.408386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.387 [2024-12-06 20:51:28.408392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.408419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.387 [2024-12-06 20:51:28.408426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:11.387 [2024-12-06 20:51:28.408432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.387 [2024-12-06 20:51:28.408441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.408509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.387 [2024-12-06 20:51:28.408516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:11.387 [2024-12-06 20:51:28.408523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.387 [2024-12-06 20:51:28.408529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.408550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.387 [2024-12-06 20:51:28.408557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:11.387 [2024-12-06 20:51:28.408563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.387 [2024-12-06 20:51:28.408569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.408600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.387 [2024-12-06 20:51:28.408606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:11.387 [2024-12-06 
20:51:28.408612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.387 [2024-12-06 20:51:28.408619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.408649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.387 [2024-12-06 20:51:28.408656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:11.387 [2024-12-06 20:51:28.408662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.387 [2024-12-06 20:51:28.408670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.387 [2024-12-06 20:51:28.408757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 321.241 ms, result 0 00:24:12.330 00:24:12.330 00:24:12.330 20:51:29 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:12.330 [2024-12-06 20:51:29.238750] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:24:12.330 [2024-12-06 20:51:29.238871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79009 ] 00:24:12.330 [2024-12-06 20:51:29.393081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.589 [2024-12-06 20:51:29.468728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.589 [2024-12-06 20:51:29.679049] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:12.589 [2024-12-06 20:51:29.679103] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:12.852 [2024-12-06 20:51:29.826463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.852 [2024-12-06 20:51:29.826621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:12.852 [2024-12-06 20:51:29.826641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:12.852 [2024-12-06 20:51:29.826649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.852 [2024-12-06 20:51:29.826705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.852 [2024-12-06 20:51:29.826718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:12.852 [2024-12-06 20:51:29.826726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:12.852 [2024-12-06 20:51:29.826734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.852 [2024-12-06 20:51:29.826753] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:12.852 [2024-12-06 20:51:29.827469] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:12.852 [2024-12-06 20:51:29.827486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.852 [2024-12-06 20:51:29.827494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:12.852 [2024-12-06 20:51:29.827502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:24:12.852 
[2024-12-06 20:51:29.827509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.852 [2024-12-06 20:51:29.828589] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:12.852 [2024-12-06 20:51:29.841212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.852 [2024-12-06 20:51:29.841346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:12.852 [2024-12-06 20:51:29.841364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.625 ms 00:24:12.852 [2024-12-06 20:51:29.841372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.852 [2024-12-06 20:51:29.841668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.852 [2024-12-06 20:51:29.841699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:12.852 [2024-12-06 20:51:29.841711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:12.852 [2024-12-06 20:51:29.841721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.852 [2024-12-06 20:51:29.846944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.852 [2024-12-06 20:51:29.846978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:12.852 [2024-12-06 20:51:29.846988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.138 ms 00:24:12.852 [2024-12-06 20:51:29.847000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.852 [2024-12-06 20:51:29.847070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.852 [2024-12-06 20:51:29.847079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:12.852 [2024-12-06 20:51:29.847087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:12.852 [2024-12-06 20:51:29.847094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.852 [2024-12-06 20:51:29.847132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.852 [2024-12-06 20:51:29.847142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:12.852 [2024-12-06 20:51:29.847150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:12.852 [2024-12-06 20:51:29.847157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.852 [2024-12-06 20:51:29.847181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:12.852 [2024-12-06 20:51:29.850541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.852 [2024-12-06 20:51:29.850572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:12.852 [2024-12-06 20:51:29.850584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.366 ms 00:24:12.852 [2024-12-06 20:51:29.850591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.852 [2024-12-06 20:51:29.850623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.852 [2024-12-06 20:51:29.850631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:12.852 [2024-12-06 20:51:29.850639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:12.852 [2024-12-06 20:51:29.850646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.852 [2024-12-06 
20:51:29.850665] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:12.852 [2024-12-06 20:51:29.850683] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:12.852 [2024-12-06 20:51:29.850718] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:12.852 [2024-12-06 20:51:29.850735] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:12.852 [2024-12-06 20:51:29.850837] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:12.852 [2024-12-06 20:51:29.850847] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:12.852 [2024-12-06 20:51:29.850857] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:12.852 [2024-12-06 20:51:29.850867] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:12.852 [2024-12-06 20:51:29.850875] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:12.852 [2024-12-06 20:51:29.850884] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:12.852 [2024-12-06 20:51:29.850912] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:12.852 [2024-12-06 20:51:29.850922] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:12.852 [2024-12-06 20:51:29.850929] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:12.852 [2024-12-06 20:51:29.850937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.853 [2024-12-06 20:51:29.850949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:12.853 [2024-12-06 20:51:29.850956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:24:12.853 [2024-12-06 20:51:29.850964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.853 [2024-12-06 20:51:29.851070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.853 [2024-12-06 20:51:29.851081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:12.853 [2024-12-06 20:51:29.851089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:24:12.853 [2024-12-06 20:51:29.851101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.853 [2024-12-06 20:51:29.851229] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:12.853 [2024-12-06 20:51:29.851240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:12.853 [2024-12-06 20:51:29.851248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:12.853 [2024-12-06 20:51:29.851255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:12.853 [2024-12-06 20:51:29.851269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:12.853 [2024-12-06 20:51:29.851282] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region band_md 00:24:12.853 [2024-12-06 20:51:29.851289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:12.853 [2024-12-06 20:51:29.851302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:12.853 [2024-12-06 20:51:29.851309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:12.853 [2024-12-06 20:51:29.851317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:12.853 [2024-12-06 20:51:29.851330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:12.853 [2024-12-06 20:51:29.851337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:12.853 [2024-12-06 20:51:29.851343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:12.853 [2024-12-06 20:51:29.851356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:12.853 [2024-12-06 20:51:29.851362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:12.853 [2024-12-06 20:51:29.851376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.853 [2024-12-06 20:51:29.851388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:12.853 [2024-12-06 20:51:29.851395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.853 [2024-12-06 20:51:29.851407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:12.853 [2024-12-06 20:51:29.851413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.853 [2024-12-06 20:51:29.851426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:12.853 [2024-12-06 20:51:29.851432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.853 [2024-12-06 20:51:29.851445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:12.853 [2024-12-06 20:51:29.851454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:12.853 [2024-12-06 20:51:29.851467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:12.853 [2024-12-06 20:51:29.851473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:12.853 [2024-12-06 20:51:29.851480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:12.853 [2024-12-06 20:51:29.851486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:12.853 [2024-12-06 20:51:29.851492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:12.853 [2024-12-06 20:51:29.851503] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:12.853 [2024-12-06 20:51:29.851516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:12.853 [2024-12-06 20:51:29.851529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851535] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:12.853 [2024-12-06 20:51:29.851543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:12.853 [2024-12-06 20:51:29.851550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:12.853 [2024-12-06 20:51:29.851560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.853 [2024-12-06 20:51:29.851568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:12.853 [2024-12-06 20:51:29.851575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:12.853 [2024-12-06 20:51:29.851581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:12.853 [2024-12-06 20:51:29.851587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:12.853 [2024-12-06 20:51:29.851594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:12.853 [2024-12-06 20:51:29.851600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:12.853 [2024-12-06 20:51:29.851608] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:12.853 [2024-12-06 20:51:29.851617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:12.853 [2024-12-06 20:51:29.851628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:12.853 [2024-12-06 20:51:29.851635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:12.853 [2024-12-06 20:51:29.851642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:12.853 [2024-12-06 20:51:29.851648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:12.853 [2024-12-06 20:51:29.851655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:12.853 [2024-12-06 20:51:29.851662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:12.853 [2024-12-06 20:51:29.851669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:12.853 [2024-12-06 20:51:29.851676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:12.853 [2024-12-06 20:51:29.851682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:12.853 [2024-12-06 20:51:29.851689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:12.853 [2024-12-06 20:51:29.851696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:12.853 [2024-12-06 20:51:29.851703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:12.853 [2024-12-06 20:51:29.851710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:12.853 [2024-12-06 20:51:29.851717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:12.853 [2024-12-06 20:51:29.851725] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:12.853 [2024-12-06 20:51:29.851732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:12.853 [2024-12-06 20:51:29.851740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:12.853 [2024-12-06 20:51:29.851747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:12.853 [2024-12-06 20:51:29.851754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:12.853 [2024-12-06 20:51:29.851761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:12.853 [2024-12-06 20:51:29.851768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.853 [2024-12-06 20:51:29.851776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:12.853 [2024-12-06 20:51:29.851784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:24:12.853 [2024-12-06 20:51:29.851791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.853 [2024-12-06 20:51:29.878761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.853 [2024-12-06 20:51:29.878927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:12.853 [2024-12-06 20:51:29.878944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.913 ms 00:24:12.853 [2024-12-06 20:51:29.878957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.853 [2024-12-06 20:51:29.879042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.853 [2024-12-06 20:51:29.879051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:12.853 [2024-12-06 20:51:29.879059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:12.853 [2024-12-06 20:51:29.879066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.853 [2024-12-06 20:51:29.928295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.853 [2024-12-06 20:51:29.929468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:12.853 [2024-12-06 20:51:29.929495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.176 ms 00:24:12.853 [2024-12-06 20:51:29.929507] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.853 [2024-12-06 20:51:29.929560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.853 [2024-12-06 20:51:29.929570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:12.853 [2024-12-06 20:51:29.929586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:12.853 [2024-12-06 20:51:29.929595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.853 [2024-12-06 20:51:29.930108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.854 [2024-12-06 20:51:29.930137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:12.854 [2024-12-06 20:51:29.930148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:24:12.854 [2024-12-06 20:51:29.930155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.854 [2024-12-06 20:51:29.930302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.854 [2024-12-06 20:51:29.930319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:12.854 [2024-12-06 20:51:29.930333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:24:12.854 [2024-12-06 20:51:29.930341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.854 [2024-12-06 20:51:29.945231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.854 [2024-12-06 20:51:29.945276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:12.854 [2024-12-06 20:51:29.945288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.870 ms 00:24:12.854 [2024-12-06 20:51:29.945296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.854 [2024-12-06 20:51:29.959396] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:24:12.854 [2024-12-06 20:51:29.959446] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:12.854 [2024-12-06 20:51:29.959460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.854 [2024-12-06 20:51:29.959468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:12.854 [2024-12-06 20:51:29.959478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.053 ms 00:24:12.854 [2024-12-06 20:51:29.959485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 [2024-12-06 20:51:29.985705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.116 [2024-12-06 20:51:29.985760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:13.116 [2024-12-06 20:51:29.985774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.164 ms 00:24:13.116 [2024-12-06 20:51:29.985782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 [2024-12-06 20:51:29.999192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.116 [2024-12-06 20:51:29.999381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:13.116 [2024-12-06 20:51:29.999403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.344 ms 00:24:13.116 [2024-12-06 20:51:29.999411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 
[2024-12-06 20:51:30.012588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.116 [2024-12-06 20:51:30.012646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:13.116 [2024-12-06 20:51:30.012660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.048 ms 00:24:13.116 [2024-12-06 20:51:30.012669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 [2024-12-06 20:51:30.013377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.116 [2024-12-06 20:51:30.013405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:13.116 [2024-12-06 20:51:30.013420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:24:13.116 [2024-12-06 20:51:30.013428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 [2024-12-06 20:51:30.079761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.116 [2024-12-06 20:51:30.079838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:13.116 [2024-12-06 20:51:30.079864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.309 ms 00:24:13.116 [2024-12-06 20:51:30.079873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 [2024-12-06 20:51:30.091831] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:13.116 [2024-12-06 20:51:30.095292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.116 [2024-12-06 20:51:30.095342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:13.116 [2024-12-06 20:51:30.095355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.307 ms 00:24:13.116 [2024-12-06 20:51:30.095364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 [2024-12-06 20:51:30.095467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.116 [2024-12-06 20:51:30.095478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:13.116 [2024-12-06 20:51:30.095492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:13.116 [2024-12-06 20:51:30.095500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 [2024-12-06 20:51:30.096434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.116 [2024-12-06 20:51:30.096478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:13.116 [2024-12-06 20:51:30.096490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:24:13.116 [2024-12-06 20:51:30.096500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 [2024-12-06 20:51:30.096536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.116 [2024-12-06 20:51:30.096552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:13.116 [2024-12-06 20:51:30.096563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:13.116 [2024-12-06 20:51:30.096573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 [2024-12-06 20:51:30.096620] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:13.116 [2024-12-06 20:51:30.096631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.116 [2024-12-06 
20:51:30.096639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:13.116 [2024-12-06 20:51:30.096649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:13.116 [2024-12-06 20:51:30.096657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 [2024-12-06 20:51:30.123447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.116 [2024-12-06 20:51:30.123503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:13.116 [2024-12-06 20:51:30.123524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.771 ms 00:24:13.116 [2024-12-06 20:51:30.123533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 [2024-12-06 20:51:30.123625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.116 [2024-12-06 20:51:30.123636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:13.116 [2024-12-06 20:51:30.123646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:13.116 [2024-12-06 20:51:30.123655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.116 [2024-12-06 20:51:30.125286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.266 ms, result 0 00:24:14.503  [2024-12-06T20:51:32.577Z] Copying: 1032/1048576 [kB] (1032 kBps) [2024-12-06T20:51:33.521Z] Copying: 10196/1048576 [kB] (9164 kBps) [2024-12-06T20:51:34.463Z] Copying: 21/1024 [MB] (11 MBps) [2024-12-06T20:51:35.398Z] Copying: 34/1024 [MB] (12 MBps) [2024-12-06T20:51:36.334Z] Copying: 49/1024 [MB] (15 MBps) [2024-12-06T20:51:37.744Z] Copying: 67/1024 [MB] (17 MBps) [2024-12-06T20:51:38.685Z] Copying: 77/1024 [MB] (10 MBps) [2024-12-06T20:51:39.628Z] Copying: 88/1024 [MB] (10 MBps) [2024-12-06T20:51:40.571Z] Copying: 98/1024 [MB] (10 MBps) [2024-12-06T20:51:41.515Z] Copying: 110/1024 [MB] (11 MBps) [2024-12-06T20:51:42.459Z] Copying: 129/1024 [MB] (19 MBps) [2024-12-06T20:51:43.403Z] Copying: 147/1024 [MB] (17 MBps) [2024-12-06T20:51:44.347Z] Copying: 159/1024 [MB] (11 MBps) [2024-12-06T20:51:45.732Z] Copying: 170/1024 [MB] (10 MBps) [2024-12-06T20:51:46.669Z] Copying: 180/1024 [MB] (10 MBps) [2024-12-06T20:51:47.613Z] Copying: 191/1024 [MB] (10 MBps) [2024-12-06T20:51:48.556Z] Copying: 201/1024 [MB] (10 MBps) [2024-12-06T20:51:49.490Z] Copying: 212/1024 [MB] (10 MBps) [2024-12-06T20:51:50.443Z] Copying: 224/1024 [MB] (11 MBps) [2024-12-06T20:51:51.383Z] Copying: 235/1024 [MB] (11 MBps) [2024-12-06T20:51:52.756Z] Copying: 246/1024 [MB] (10 MBps) [2024-12-06T20:51:53.323Z] Copying: 260/1024 [MB] (14 MBps) [2024-12-06T20:51:54.707Z] Copying: 271/1024 [MB] (11 MBps) [2024-12-06T20:51:55.651Z] Copying: 283/1024 [MB] (11 MBps) [2024-12-06T20:51:56.617Z] Copying: 296/1024 [MB] (13 MBps) [2024-12-06T20:51:57.563Z] Copying: 307/1024 [MB] (10 MBps) [2024-12-06T20:51:58.508Z] Copying: 317/1024 [MB] (10 MBps) [2024-12-06T20:51:59.452Z] Copying: 335752/1048576 [kB] (10224 kBps) [2024-12-06T20:52:00.397Z] Copying: 339/1024 [MB] (11 MBps) [2024-12-06T20:52:01.342Z] Copying: 350/1024 [MB] (10 MBps) [2024-12-06T20:52:02.731Z] Copying: 362/1024 [MB] (12 MBps) [2024-12-06T20:52:03.676Z] Copying: 373/1024 [MB] (10 MBps) [2024-12-06T20:52:04.619Z] Copying: 384/1024 [MB] (10 MBps) [2024-12-06T20:52:05.581Z] Copying: 394/1024 [MB] (10 MBps) [2024-12-06T20:52:06.522Z] Copying: 405/1024 [MB] (11 MBps) 
[2024-12-06T20:52:07.467Z] Copying: 416/1024 [MB] (10 MBps) [2024-12-06T20:52:08.412Z] Copying: 427/1024 [MB] (10 MBps) [2024-12-06T20:52:09.356Z] Copying: 444/1024 [MB] (16 MBps) [2024-12-06T20:52:10.746Z] Copying: 456/1024 [MB] (12 MBps) [2024-12-06T20:52:11.320Z] Copying: 473/1024 [MB] (17 MBps) [2024-12-06T20:52:12.711Z] Copying: 493/1024 [MB] (19 MBps) [2024-12-06T20:52:13.651Z] Copying: 510/1024 [MB] (16 MBps) [2024-12-06T20:52:14.596Z] Copying: 527/1024 [MB] (16 MBps) [2024-12-06T20:52:15.542Z] Copying: 543/1024 [MB] (16 MBps) [2024-12-06T20:52:16.485Z] Copying: 556/1024 [MB] (13 MBps) [2024-12-06T20:52:17.430Z] Copying: 575/1024 [MB] (19 MBps) [2024-12-06T20:52:18.371Z] Copying: 594/1024 [MB] (18 MBps) [2024-12-06T20:52:19.756Z] Copying: 610/1024 [MB] (16 MBps) [2024-12-06T20:52:20.330Z] Copying: 625/1024 [MB] (14 MBps) [2024-12-06T20:52:21.720Z] Copying: 647/1024 [MB] (21 MBps) [2024-12-06T20:52:22.665Z] Copying: 665/1024 [MB] (18 MBps) [2024-12-06T20:52:23.611Z] Copying: 685/1024 [MB] (19 MBps) [2024-12-06T20:52:24.556Z] Copying: 701/1024 [MB] (16 MBps) [2024-12-06T20:52:25.502Z] Copying: 713/1024 [MB] (11 MBps) [2024-12-06T20:52:26.444Z] Copying: 733/1024 [MB] (20 MBps) [2024-12-06T20:52:27.383Z] Copying: 744/1024 [MB] (10 MBps) [2024-12-06T20:52:28.337Z] Copying: 755/1024 [MB] (10 MBps) [2024-12-06T20:52:29.718Z] Copying: 766/1024 [MB] (11 MBps) [2024-12-06T20:52:30.659Z] Copying: 777/1024 [MB] (10 MBps) [2024-12-06T20:52:31.605Z] Copying: 803/1024 [MB] (25 MBps) [2024-12-06T20:52:32.550Z] Copying: 819/1024 [MB] (15 MBps) [2024-12-06T20:52:33.491Z] Copying: 832/1024 [MB] (13 MBps) [2024-12-06T20:52:34.435Z] Copying: 844/1024 [MB] (11 MBps) [2024-12-06T20:52:35.374Z] Copying: 860/1024 [MB] (16 MBps) [2024-12-06T20:52:36.762Z] Copying: 874/1024 [MB] (14 MBps) [2024-12-06T20:52:37.334Z] Copying: 899/1024 [MB] (24 MBps) [2024-12-06T20:52:38.719Z] Copying: 917/1024 [MB] (18 MBps) [2024-12-06T20:52:39.680Z] Copying: 930/1024 [MB] (12 MBps) [2024-12-06T20:52:40.623Z] Copying: 950/1024 [MB] (19 MBps) [2024-12-06T20:52:41.570Z] Copying: 962/1024 [MB] (12 MBps) [2024-12-06T20:52:42.511Z] Copying: 973/1024 [MB] (11 MBps) [2024-12-06T20:52:43.458Z] Copying: 990/1024 [MB] (16 MBps) [2024-12-06T20:52:44.427Z] Copying: 1002/1024 [MB] (11 MBps) [2024-12-06T20:52:45.374Z] Copying: 1016/1024 [MB] (14 MBps) [2024-12-06T20:52:45.374Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-06 20:52:45.227753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.241 [2024-12-06 20:52:45.227827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:28.241 [2024-12-06 20:52:45.227849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:28.241 [2024-12-06 20:52:45.227859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.241 [2024-12-06 20:52:45.227913] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:28.241 [2024-12-06 20:52:45.234968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.241 [2024-12-06 20:52:45.235055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:28.241 [2024-12-06 20:52:45.235089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.028 ms 00:25:28.241 [2024-12-06 20:52:45.235114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.241 [2024-12-06 20:52:45.235681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:28.241 [2024-12-06 20:52:45.235731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:28.241 [2024-12-06 20:52:45.235762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:25:28.241 [2024-12-06 20:52:45.235802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.241 [2024-12-06 20:52:45.249127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.241 [2024-12-06 20:52:45.249192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:28.241 [2024-12-06 20:52:45.249210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.279 ms 00:25:28.241 [2024-12-06 20:52:45.249222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.241 [2024-12-06 20:52:45.255529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.241 [2024-12-06 20:52:45.255591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:28.241 [2024-12-06 20:52:45.255610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.254 ms 00:25:28.241 [2024-12-06 20:52:45.255632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.241 [2024-12-06 20:52:45.283071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.241 [2024-12-06 20:52:45.283139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:28.241 [2024-12-06 20:52:45.283159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.371 ms 00:25:28.241 [2024-12-06 20:52:45.283171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.241 [2024-12-06 20:52:45.300106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.241 [2024-12-06 20:52:45.300195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:28.241 [2024-12-06 20:52:45.300214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.871 ms 00:25:28.241 [2024-12-06 20:52:45.300227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.502 [2024-12-06 20:52:45.499198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.502 [2024-12-06 20:52:45.499280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:28.502 [2024-12-06 20:52:45.499300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 198.907 ms 00:25:28.502 [2024-12-06 20:52:45.499313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.502 [2024-12-06 20:52:45.525724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.502 [2024-12-06 20:52:45.525791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:28.502 [2024-12-06 20:52:45.525810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.386 ms 00:25:28.502 [2024-12-06 20:52:45.525822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.502 [2024-12-06 20:52:45.552593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.502 [2024-12-06 20:52:45.552657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:28.502 [2024-12-06 20:52:45.552677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.616 ms 00:25:28.502 [2024-12-06 20:52:45.552687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.502 [2024-12-06 20:52:45.578149] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.502 [2024-12-06 20:52:45.578212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:28.502 [2024-12-06 20:52:45.578231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.400 ms 00:25:28.502 [2024-12-06 20:52:45.578242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.502 [2024-12-06 20:52:45.603995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.502 [2024-12-06 20:52:45.604055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:28.502 [2024-12-06 20:52:45.604075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.630 ms 00:25:28.502 [2024-12-06 20:52:45.604086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.502 [2024-12-06 20:52:45.604155] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:28.502 [2024-12-06 20:52:45.604179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:25:28.502 [2024-12-06 20:52:45.604195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 
00:25:28.502 [2024-12-06 20:52:45.604439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:28.502 [2024-12-06 20:52:45.604559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 
wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.604993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605481] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:28.503 [2024-12-06 20:52:45.605578] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:28.503 [2024-12-06 20:52:45.605591] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5bfd91c7-c6a7-4ebc-88c6-cb1b2f1b017d 00:25:28.503 [2024-12-06 20:52:45.605605] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:25:28.503 [2024-12-06 20:52:45.605617] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132032 00:25:28.503 [2024-12-06 20:52:45.605630] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131072 00:25:28.503 [2024-12-06 20:52:45.605647] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073 00:25:28.503 [2024-12-06 20:52:45.605681] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:28.503 [2024-12-06 20:52:45.605705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:28.503 [2024-12-06 20:52:45.605719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:28.503 [2024-12-06 20:52:45.605730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:28.503 [2024-12-06 20:52:45.605742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:28.503 [2024-12-06 20:52:45.605760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.503 [2024-12-06 20:52:45.605773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:28.503 [2024-12-06 20:52:45.605788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:25:28.503 [2024-12-06 20:52:45.605801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.503 [2024-12-06 20:52:45.619629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.503 [2024-12-06 20:52:45.619688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:28.503 [2024-12-06 20:52:45.619714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.772 ms 00:25:28.503 [2024-12-06 20:52:45.619726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.503 [2024-12-06 20:52:45.620239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.503 [2024-12-06 20:52:45.620286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:28.504 [2024-12-06 20:52:45.620302] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:25:28.504 [2024-12-06 20:52:45.620317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.764 [2024-12-06 20:52:45.656981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.764 [2024-12-06 20:52:45.657043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:28.764 [2024-12-06 20:52:45.657056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.764 [2024-12-06 20:52:45.657065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.764 [2024-12-06 20:52:45.657132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.764 [2024-12-06 20:52:45.657143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:28.764 [2024-12-06 20:52:45.657152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.765 [2024-12-06 20:52:45.657161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.765 [2024-12-06 20:52:45.657255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.765 [2024-12-06 20:52:45.657267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:28.765 [2024-12-06 20:52:45.657283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.765 [2024-12-06 20:52:45.657292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.765 [2024-12-06 20:52:45.657309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.765 [2024-12-06 20:52:45.657319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:28.765 [2024-12-06 20:52:45.657328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.765 [2024-12-06 20:52:45.657337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.765 [2024-12-06 20:52:45.741008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.765 [2024-12-06 20:52:45.741076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:28.765 [2024-12-06 20:52:45.741089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.765 [2024-12-06 20:52:45.741098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.765 [2024-12-06 20:52:45.810724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.765 [2024-12-06 20:52:45.810791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:28.765 [2024-12-06 20:52:45.810803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.765 [2024-12-06 20:52:45.810812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.765 [2024-12-06 20:52:45.810871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.765 [2024-12-06 20:52:45.810882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:28.765 [2024-12-06 20:52:45.810914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.765 [2024-12-06 20:52:45.810930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.765 [2024-12-06 20:52:45.810993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.765 [2024-12-06 20:52:45.811004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
00:25:28.765 [2024-12-06 20:52:45.811012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.765 [2024-12-06 20:52:45.811021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.765 [2024-12-06 20:52:45.811118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.765 [2024-12-06 20:52:45.811130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:28.765 [2024-12-06 20:52:45.811139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.765 [2024-12-06 20:52:45.811147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.765 [2024-12-06 20:52:45.811183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.765 [2024-12-06 20:52:45.811193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:28.765 [2024-12-06 20:52:45.811201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.765 [2024-12-06 20:52:45.811210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.765 [2024-12-06 20:52:45.811254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.765 [2024-12-06 20:52:45.811265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:28.765 [2024-12-06 20:52:45.811273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.765 [2024-12-06 20:52:45.811282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.765 [2024-12-06 20:52:45.811333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.765 [2024-12-06 20:52:45.811344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:28.765 [2024-12-06 20:52:45.811352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.765 [2024-12-06 20:52:45.811360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.765 [2024-12-06 20:52:45.811498] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 583.708 ms, result 0 00:25:29.708 00:25:29.708 00:25:29.708 20:52:46 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:32.258 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:32.258 20:52:48 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:32.258 20:52:48 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:32.258 20:52:48 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:32.258 20:52:48 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:32.258 20:52:48 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:32.258 20:52:48 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77210 00:25:32.258 20:52:48 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77210 ']' 00:25:32.258 Process with pid 77210 is not found 00:25:32.258 20:52:48 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77210 00:25:32.258 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77210) - No such process 00:25:32.258 20:52:48 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77210 is not found' 00:25:32.258 Remove shared memory files 00:25:32.258 
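The md5sum -c step above is the pass/fail gate of the restore test: data written through the FTL bdev before shutdown is read back after the restore and compared by checksum. A minimal standalone sketch of that idiom, with hypothetical file names and sizes (the real flow is driven by test/ftl/restore.sh, and the spdk_dd flags here are illustrative, assuming a checkout at the repo root):

    # sketch only -- names/sizes are illustrative, not taken from this run
    dd if=/dev/urandom of=testfile bs=4096 count=262144        # create ~1 GiB of random test data
    md5sum testfile > testfile.md5                             # record the reference checksum
    ./build/bin/spdk_dd --json test/ftl/config/ftl.json \
        --if=testfile --ob=ftl0 --bs=4096 --count=262144       # write it through the ftl0 bdev
    # ... clean FTL shutdown, then startup/restore (the trace above) ...
    ./build/bin/spdk_dd --json test/ftl/config/ftl.json \
        --ib=ftl0 --of=testfile --bs=4096 --count=262144       # read the same range back out
    md5sum -c testfile.md5                                     # prints "testfile: OK" and exits 0 iff the restore was lossless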
20:52:48 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:32.258 20:52:48 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:32.258 20:52:48 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:32.258 20:52:48 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:32.258 20:52:48 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:32.258 20:52:48 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:32.258 20:52:48 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:32.258 00:25:32.258 real 4m13.595s 00:25:32.258 user 4m2.368s 00:25:32.258 sys 0m11.248s 00:25:32.258 20:52:48 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:32.258 ************************************ 00:25:32.258 END TEST ftl_restore 00:25:32.258 20:52:48 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:32.258 ************************************ 00:25:32.258 20:52:49 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:32.258 20:52:49 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:32.258 20:52:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.258 20:52:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:32.258 ************************************ 00:25:32.258 START TEST ftl_dirty_shutdown 00:25:32.258 ************************************ 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:32.258 * Looking for test storage... 00:25:32.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:32.258 20:52:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:32.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.259 --rc genhtml_branch_coverage=1 00:25:32.259 --rc genhtml_function_coverage=1 00:25:32.259 --rc genhtml_legend=1 00:25:32.259 --rc geninfo_all_blocks=1 00:25:32.259 --rc geninfo_unexecuted_blocks=1 00:25:32.259 00:25:32.259 ' 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:32.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.259 --rc genhtml_branch_coverage=1 00:25:32.259 --rc genhtml_function_coverage=1 00:25:32.259 --rc genhtml_legend=1 00:25:32.259 --rc geninfo_all_blocks=1 00:25:32.259 --rc geninfo_unexecuted_blocks=1 00:25:32.259 00:25:32.259 ' 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:32.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.259 --rc genhtml_branch_coverage=1 00:25:32.259 --rc genhtml_function_coverage=1 00:25:32.259 --rc genhtml_legend=1 00:25:32.259 --rc geninfo_all_blocks=1 00:25:32.259 --rc geninfo_unexecuted_blocks=1 00:25:32.259 00:25:32.259 ' 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:32.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.259 --rc genhtml_branch_coverage=1 00:25:32.259 --rc genhtml_function_coverage=1 00:25:32.259 --rc genhtml_legend=1 00:25:32.259 --rc geninfo_all_blocks=1 00:25:32.259 --rc geninfo_unexecuted_blocks=1 00:25:32.259 00:25:32.259 ' 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:32.259 20:52:49 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79884 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79884 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79884 ']' 00:25:32.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.259 20:52:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:32.259 [2024-12-06 20:52:49.285097] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
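The waitforlisten step above reflects a standard SPDK pattern: launch spdk_tgt in the background, then poll its UNIX-domain RPC socket until it answers before issuing any bdev RPCs. A minimal sketch of that pattern (the harness's waitforlisten adds PID checks, retries, and a timeout beyond this; rpc_get_methods is used here only as a cheap liveness query):

    # sketch only -- assumes a checkout at the repo root
    ./build/bin/spdk_tgt -m 0x1 &                        # start the target on core 0; RPC server listens on /var/tmp/spdk.sock
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                                        # keep polling until the RPC server answers
    done
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # now safe to issue RPCs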
00:25:32.259 [2024-12-06 20:52:49.285246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79884 ] 00:25:32.520 [2024-12-06 20:52:49.450383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.520 [2024-12-06 20:52:49.581081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:33.465 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:33.727 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:33.728 { 00:25:33.728 "name": "nvme0n1", 00:25:33.728 "aliases": [ 00:25:33.728 "543e537d-ae1a-46fc-9a84-15a6e3f24dca" 00:25:33.728 ], 00:25:33.728 "product_name": "NVMe disk", 00:25:33.728 "block_size": 4096, 00:25:33.728 "num_blocks": 1310720, 00:25:33.728 "uuid": "543e537d-ae1a-46fc-9a84-15a6e3f24dca", 00:25:33.728 "numa_id": -1, 00:25:33.728 "assigned_rate_limits": { 00:25:33.728 "rw_ios_per_sec": 0, 00:25:33.728 "rw_mbytes_per_sec": 0, 00:25:33.728 "r_mbytes_per_sec": 0, 00:25:33.728 "w_mbytes_per_sec": 0 00:25:33.728 }, 00:25:33.728 "claimed": true, 00:25:33.728 "claim_type": "read_many_write_one", 00:25:33.728 "zoned": false, 00:25:33.728 "supported_io_types": { 00:25:33.728 "read": true, 00:25:33.728 "write": true, 00:25:33.728 "unmap": true, 00:25:33.728 "flush": true, 00:25:33.728 "reset": true, 00:25:33.728 "nvme_admin": true, 00:25:33.728 "nvme_io": true, 00:25:33.728 "nvme_io_md": false, 00:25:33.728 "write_zeroes": true, 00:25:33.728 "zcopy": false, 00:25:33.728 "get_zone_info": false, 00:25:33.728 "zone_management": false, 00:25:33.728 "zone_append": false, 00:25:33.728 "compare": true, 00:25:33.728 "compare_and_write": false, 00:25:33.728 "abort": true, 00:25:33.728 "seek_hole": false, 00:25:33.728 "seek_data": false, 00:25:33.728 
"copy": true, 00:25:33.728 "nvme_iov_md": false 00:25:33.728 }, 00:25:33.728 "driver_specific": { 00:25:33.728 "nvme": [ 00:25:33.728 { 00:25:33.728 "pci_address": "0000:00:11.0", 00:25:33.728 "trid": { 00:25:33.728 "trtype": "PCIe", 00:25:33.728 "traddr": "0000:00:11.0" 00:25:33.728 }, 00:25:33.728 "ctrlr_data": { 00:25:33.728 "cntlid": 0, 00:25:33.728 "vendor_id": "0x1b36", 00:25:33.728 "model_number": "QEMU NVMe Ctrl", 00:25:33.728 "serial_number": "12341", 00:25:33.728 "firmware_revision": "8.0.0", 00:25:33.728 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:33.728 "oacs": { 00:25:33.728 "security": 0, 00:25:33.728 "format": 1, 00:25:33.728 "firmware": 0, 00:25:33.728 "ns_manage": 1 00:25:33.728 }, 00:25:33.728 "multi_ctrlr": false, 00:25:33.728 "ana_reporting": false 00:25:33.728 }, 00:25:33.728 "vs": { 00:25:33.728 "nvme_version": "1.4" 00:25:33.728 }, 00:25:33.728 "ns_data": { 00:25:33.728 "id": 1, 00:25:33.728 "can_share": false 00:25:33.728 } 00:25:33.728 } 00:25:33.728 ], 00:25:33.728 "mp_policy": "active_passive" 00:25:33.728 } 00:25:33.728 } 00:25:33.728 ]' 00:25:33.728 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:33.728 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:33.728 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:33.990 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:33.990 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:33.990 20:52:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:33.990 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:33.990 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:33.990 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:33.990 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:33.990 20:52:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:33.990 20:52:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=092b3725-d6f8-4350-a688-0d9329755058 00:25:33.990 20:52:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:33.990 20:52:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 092b3725-d6f8-4350-a688-0d9329755058 00:25:34.252 20:52:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:34.513 20:52:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=4e7f705c-e454-4534-921b-e3f2dab44134 00:25:34.513 20:52:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4e7f705c-e454-4534-921b-e3f2dab44134 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=818ba016-88ff-4e10-881f-51e62d93df94 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 818ba016-88ff-4e10-881f-51e62d93df94 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=818ba016-88ff-4e10-881f-51e62d93df94 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 818ba016-88ff-4e10-881f-51e62d93df94 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=818ba016-88ff-4e10-881f-51e62d93df94 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:34.776 20:52:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 818ba016-88ff-4e10-881f-51e62d93df94 00:25:35.037 20:52:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:35.037 { 00:25:35.037 "name": "818ba016-88ff-4e10-881f-51e62d93df94", 00:25:35.037 "aliases": [ 00:25:35.037 "lvs/nvme0n1p0" 00:25:35.037 ], 00:25:35.037 "product_name": "Logical Volume", 00:25:35.037 "block_size": 4096, 00:25:35.037 "num_blocks": 26476544, 00:25:35.037 "uuid": "818ba016-88ff-4e10-881f-51e62d93df94", 00:25:35.037 "assigned_rate_limits": { 00:25:35.037 "rw_ios_per_sec": 0, 00:25:35.037 "rw_mbytes_per_sec": 0, 00:25:35.037 "r_mbytes_per_sec": 0, 00:25:35.037 "w_mbytes_per_sec": 0 00:25:35.037 }, 00:25:35.037 "claimed": false, 00:25:35.037 "zoned": false, 00:25:35.037 "supported_io_types": { 00:25:35.037 "read": true, 00:25:35.037 "write": true, 00:25:35.037 "unmap": true, 00:25:35.037 "flush": false, 00:25:35.037 "reset": true, 00:25:35.037 "nvme_admin": false, 00:25:35.037 "nvme_io": false, 00:25:35.037 "nvme_io_md": false, 00:25:35.037 "write_zeroes": true, 00:25:35.037 "zcopy": false, 00:25:35.037 "get_zone_info": false, 00:25:35.037 "zone_management": false, 00:25:35.037 "zone_append": false, 00:25:35.037 "compare": false, 00:25:35.037 "compare_and_write": false, 00:25:35.037 "abort": false, 00:25:35.037 "seek_hole": true, 00:25:35.037 "seek_data": true, 00:25:35.037 "copy": false, 00:25:35.037 "nvme_iov_md": false 00:25:35.037 }, 00:25:35.037 "driver_specific": { 00:25:35.037 "lvol": { 00:25:35.037 "lvol_store_uuid": "4e7f705c-e454-4534-921b-e3f2dab44134", 00:25:35.037 "base_bdev": "nvme0n1", 00:25:35.037 "thin_provision": true, 00:25:35.037 "num_allocated_clusters": 0, 00:25:35.037 "snapshot": false, 00:25:35.037 "clone": false, 00:25:35.037 "esnap_clone": false 00:25:35.037 } 00:25:35.037 } 00:25:35.037 } 00:25:35.037 ]' 00:25:35.037 20:52:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:35.038 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:35.038 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:35.038 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:35.038 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:35.038 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:35.038 20:52:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:35.038 20:52:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:35.038 20:52:52 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:35.299 20:52:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:35.299 20:52:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:35.299 20:52:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 818ba016-88ff-4e10-881f-51e62d93df94 00:25:35.299 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=818ba016-88ff-4e10-881f-51e62d93df94 00:25:35.299 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:35.299 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:35.299 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:35.299 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 818ba016-88ff-4e10-881f-51e62d93df94 00:25:35.562 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:35.562 { 00:25:35.562 "name": "818ba016-88ff-4e10-881f-51e62d93df94", 00:25:35.562 "aliases": [ 00:25:35.562 "lvs/nvme0n1p0" 00:25:35.562 ], 00:25:35.562 "product_name": "Logical Volume", 00:25:35.562 "block_size": 4096, 00:25:35.562 "num_blocks": 26476544, 00:25:35.562 "uuid": "818ba016-88ff-4e10-881f-51e62d93df94", 00:25:35.562 "assigned_rate_limits": { 00:25:35.562 "rw_ios_per_sec": 0, 00:25:35.562 "rw_mbytes_per_sec": 0, 00:25:35.562 "r_mbytes_per_sec": 0, 00:25:35.562 "w_mbytes_per_sec": 0 00:25:35.562 }, 00:25:35.562 "claimed": false, 00:25:35.562 "zoned": false, 00:25:35.562 "supported_io_types": { 00:25:35.562 "read": true, 00:25:35.562 "write": true, 00:25:35.562 "unmap": true, 00:25:35.562 "flush": false, 00:25:35.562 "reset": true, 00:25:35.562 "nvme_admin": false, 00:25:35.562 "nvme_io": false, 00:25:35.562 "nvme_io_md": false, 00:25:35.562 "write_zeroes": true, 00:25:35.562 "zcopy": false, 00:25:35.562 "get_zone_info": false, 00:25:35.562 "zone_management": false, 00:25:35.562 "zone_append": false, 00:25:35.562 "compare": false, 00:25:35.562 "compare_and_write": false, 00:25:35.562 "abort": false, 00:25:35.562 "seek_hole": true, 00:25:35.562 "seek_data": true, 00:25:35.562 "copy": false, 00:25:35.562 "nvme_iov_md": false 00:25:35.562 }, 00:25:35.562 "driver_specific": { 00:25:35.562 "lvol": { 00:25:35.562 "lvol_store_uuid": "4e7f705c-e454-4534-921b-e3f2dab44134", 00:25:35.562 "base_bdev": "nvme0n1", 00:25:35.562 "thin_provision": true, 00:25:35.562 "num_allocated_clusters": 0, 00:25:35.562 "snapshot": false, 00:25:35.562 "clone": false, 00:25:35.562 "esnap_clone": false 00:25:35.562 } 00:25:35.562 } 00:25:35.562 } 00:25:35.562 ]' 00:25:35.562 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:35.562 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:35.562 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:35.562 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:35.562 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:35.562 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:35.562 20:52:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:35.562 20:52:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:35.824 20:52:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:35.824 20:52:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 818ba016-88ff-4e10-881f-51e62d93df94 00:25:35.824 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=818ba016-88ff-4e10-881f-51e62d93df94 00:25:35.824 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:35.824 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:35.824 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:35.824 20:52:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 818ba016-88ff-4e10-881f-51e62d93df94 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:36.086 { 00:25:36.086 "name": "818ba016-88ff-4e10-881f-51e62d93df94", 00:25:36.086 "aliases": [ 00:25:36.086 "lvs/nvme0n1p0" 00:25:36.086 ], 00:25:36.086 "product_name": "Logical Volume", 00:25:36.086 "block_size": 4096, 00:25:36.086 "num_blocks": 26476544, 00:25:36.086 "uuid": "818ba016-88ff-4e10-881f-51e62d93df94", 00:25:36.086 "assigned_rate_limits": { 00:25:36.086 "rw_ios_per_sec": 0, 00:25:36.086 "rw_mbytes_per_sec": 0, 00:25:36.086 "r_mbytes_per_sec": 0, 00:25:36.086 "w_mbytes_per_sec": 0 00:25:36.086 }, 00:25:36.086 "claimed": false, 00:25:36.086 "zoned": false, 00:25:36.086 "supported_io_types": { 00:25:36.086 "read": true, 00:25:36.086 "write": true, 00:25:36.086 "unmap": true, 00:25:36.086 "flush": false, 00:25:36.086 "reset": true, 00:25:36.086 "nvme_admin": false, 00:25:36.086 "nvme_io": false, 00:25:36.086 "nvme_io_md": false, 00:25:36.086 "write_zeroes": true, 00:25:36.086 "zcopy": false, 00:25:36.086 "get_zone_info": false, 00:25:36.086 "zone_management": false, 00:25:36.086 "zone_append": false, 00:25:36.086 "compare": false, 00:25:36.086 "compare_and_write": false, 00:25:36.086 "abort": false, 00:25:36.086 "seek_hole": true, 00:25:36.086 "seek_data": true, 00:25:36.086 "copy": false, 00:25:36.086 "nvme_iov_md": false 00:25:36.086 }, 00:25:36.086 "driver_specific": { 00:25:36.086 "lvol": { 00:25:36.086 "lvol_store_uuid": "4e7f705c-e454-4534-921b-e3f2dab44134", 00:25:36.086 "base_bdev": "nvme0n1", 00:25:36.086 "thin_provision": true, 00:25:36.086 "num_allocated_clusters": 0, 00:25:36.086 "snapshot": false, 00:25:36.086 "clone": false, 00:25:36.086 "esnap_clone": false 00:25:36.086 } 00:25:36.086 } 00:25:36.086 } 00:25:36.086 ]' 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 818ba016-88ff-4e10-881f-51e62d93df94 
--l2p_dram_limit 10' 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:36.086 20:52:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 818ba016-88ff-4e10-881f-51e62d93df94 --l2p_dram_limit 10 -c nvc0n1p0 00:25:36.349 [2024-12-06 20:52:53.292400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.349 [2024-12-06 20:52:53.292441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:36.349 [2024-12-06 20:52:53.292455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:36.349 [2024-12-06 20:52:53.292463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.349 [2024-12-06 20:52:53.292514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.349 [2024-12-06 20:52:53.292523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:36.349 [2024-12-06 20:52:53.292532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:36.349 [2024-12-06 20:52:53.292542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.349 [2024-12-06 20:52:53.292563] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:36.349 [2024-12-06 20:52:53.293196] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:36.349 [2024-12-06 20:52:53.293218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.349 [2024-12-06 20:52:53.293225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:36.349 [2024-12-06 20:52:53.293234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:25:36.349 [2024-12-06 20:52:53.293240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.349 [2024-12-06 20:52:53.293266] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c86062d0-7006-471d-8108-6d63e52b68bc 00:25:36.349 [2024-12-06 20:52:53.294220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.349 [2024-12-06 20:52:53.294239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:36.349 [2024-12-06 20:52:53.294246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:36.349 [2024-12-06 20:52:53.294256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.349 [2024-12-06 20:52:53.299028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.349 [2024-12-06 20:52:53.299144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:36.349 [2024-12-06 20:52:53.299157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:25:36.349 [2024-12-06 20:52:53.299164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.349 [2024-12-06 20:52:53.299235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.349 [2024-12-06 20:52:53.299243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:36.349 [2024-12-06 20:52:53.299250] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:36.349 [2024-12-06 20:52:53.299260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.349 [2024-12-06 20:52:53.299295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.349 [2024-12-06 20:52:53.299304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:36.349 [2024-12-06 20:52:53.299312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:36.349 [2024-12-06 20:52:53.299319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.349 [2024-12-06 20:52:53.299335] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:36.349 [2024-12-06 20:52:53.302184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.349 [2024-12-06 20:52:53.302277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:36.349 [2024-12-06 20:52:53.302293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.851 ms 00:25:36.349 [2024-12-06 20:52:53.302301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.349 [2024-12-06 20:52:53.302331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.349 [2024-12-06 20:52:53.302337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:36.349 [2024-12-06 20:52:53.302345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:36.349 [2024-12-06 20:52:53.302351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.349 [2024-12-06 20:52:53.302370] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:36.349 [2024-12-06 20:52:53.302481] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:36.349 [2024-12-06 20:52:53.302493] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:36.349 [2024-12-06 20:52:53.302502] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:36.349 [2024-12-06 20:52:53.302511] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:36.349 [2024-12-06 20:52:53.302518] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:36.349 [2024-12-06 20:52:53.302526] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:36.349 [2024-12-06 20:52:53.302531] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:36.349 [2024-12-06 20:52:53.302540] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:36.349 [2024-12-06 20:52:53.302546] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:36.349 [2024-12-06 20:52:53.302554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.349 [2024-12-06 20:52:53.302564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:36.349 [2024-12-06 20:52:53.302571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:25:36.349 [2024-12-06 20:52:53.302576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.349 [2024-12-06 20:52:53.302644] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.349 [2024-12-06 20:52:53.302650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:36.349 [2024-12-06 20:52:53.302657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:36.349 [2024-12-06 20:52:53.302662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.349 [2024-12-06 20:52:53.302740] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:36.349 [2024-12-06 20:52:53.302747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:36.349 [2024-12-06 20:52:53.302754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.349 [2024-12-06 20:52:53.302760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.349 [2024-12-06 20:52:53.302768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:36.349 [2024-12-06 20:52:53.302773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:36.349 [2024-12-06 20:52:53.302779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:36.350 [2024-12-06 20:52:53.302784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:36.350 [2024-12-06 20:52:53.302791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:36.350 [2024-12-06 20:52:53.302796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.350 [2024-12-06 20:52:53.302804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:36.350 [2024-12-06 20:52:53.302809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:36.350 [2024-12-06 20:52:53.302816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.350 [2024-12-06 20:52:53.302821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:36.350 [2024-12-06 20:52:53.302827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:36.350 [2024-12-06 20:52:53.302832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.350 [2024-12-06 20:52:53.302841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:36.350 [2024-12-06 20:52:53.302846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:36.350 [2024-12-06 20:52:53.302852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.350 [2024-12-06 20:52:53.302858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:36.350 [2024-12-06 20:52:53.302864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:36.350 [2024-12-06 20:52:53.302869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.350 [2024-12-06 20:52:53.302875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:36.350 [2024-12-06 20:52:53.302880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:36.350 [2024-12-06 20:52:53.302895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.350 [2024-12-06 20:52:53.302901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:36.350 [2024-12-06 20:52:53.302907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:36.350 [2024-12-06 20:52:53.302912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.350 [2024-12-06 20:52:53.302918] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:36.350 [2024-12-06 20:52:53.302923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:36.350 [2024-12-06 20:52:53.302930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.350 [2024-12-06 20:52:53.302935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:36.350 [2024-12-06 20:52:53.302942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:36.350 [2024-12-06 20:52:53.302948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.350 [2024-12-06 20:52:53.302954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:36.350 [2024-12-06 20:52:53.302959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:36.350 [2024-12-06 20:52:53.302967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.350 [2024-12-06 20:52:53.302972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:36.350 [2024-12-06 20:52:53.302978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:36.350 [2024-12-06 20:52:53.302983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.350 [2024-12-06 20:52:53.302989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:36.350 [2024-12-06 20:52:53.302994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:36.350 [2024-12-06 20:52:53.303000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.350 [2024-12-06 20:52:53.303005] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:36.350 [2024-12-06 20:52:53.303012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:36.350 [2024-12-06 20:52:53.303017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.350 [2024-12-06 20:52:53.303024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.350 [2024-12-06 20:52:53.303030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:36.350 [2024-12-06 20:52:53.303039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:36.350 [2024-12-06 20:52:53.303045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:36.350 [2024-12-06 20:52:53.303051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:36.350 [2024-12-06 20:52:53.303056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:36.350 [2024-12-06 20:52:53.303063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:36.350 [2024-12-06 20:52:53.303069] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:36.350 [2024-12-06 20:52:53.303079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.350 [2024-12-06 20:52:53.303085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:36.350 [2024-12-06 20:52:53.303092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:36.350 [2024-12-06 20:52:53.303098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:36.350 [2024-12-06 20:52:53.303105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:36.350 [2024-12-06 20:52:53.303110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:36.350 [2024-12-06 20:52:53.303117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:36.350 [2024-12-06 20:52:53.303122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:36.350 [2024-12-06 20:52:53.303130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:36.350 [2024-12-06 20:52:53.303135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:36.350 [2024-12-06 20:52:53.303143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:36.350 [2024-12-06 20:52:53.303148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:36.350 [2024-12-06 20:52:53.303155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:36.350 [2024-12-06 20:52:53.303161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:36.350 [2024-12-06 20:52:53.303167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:36.350 [2024-12-06 20:52:53.303173] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:36.350 [2024-12-06 20:52:53.303180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.350 [2024-12-06 20:52:53.303186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:36.350 [2024-12-06 20:52:53.303193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:36.350 [2024-12-06 20:52:53.303198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:36.350 [2024-12-06 20:52:53.303206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:36.350 [2024-12-06 20:52:53.303211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.350 [2024-12-06 20:52:53.303218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:36.350 [2024-12-06 20:52:53.303225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:25:36.350 [2024-12-06 20:52:53.303231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.350 [2024-12-06 20:52:53.303271] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:36.350 [2024-12-06 20:52:53.303283] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:40.561 [2024-12-06 20:52:57.353337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.561 [2024-12-06 20:52:57.353432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:40.561 [2024-12-06 20:52:57.353451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4050.050 ms 00:25:40.561 [2024-12-06 20:52:57.353463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.561 [2024-12-06 20:52:57.385816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.561 [2024-12-06 20:52:57.385884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:40.561 [2024-12-06 20:52:57.385923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.101 ms 00:25:40.561 [2024-12-06 20:52:57.385935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.561 [2024-12-06 20:52:57.386080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.561 [2024-12-06 20:52:57.386094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:40.561 [2024-12-06 20:52:57.386104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:40.561 [2024-12-06 20:52:57.386121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.561 [2024-12-06 20:52:57.421609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.561 [2024-12-06 20:52:57.421661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:40.561 [2024-12-06 20:52:57.421675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.453 ms 00:25:40.561 [2024-12-06 20:52:57.421686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.561 [2024-12-06 20:52:57.421722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.561 [2024-12-06 20:52:57.421737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:40.561 [2024-12-06 20:52:57.421746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:40.561 [2024-12-06 20:52:57.421765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.561 [2024-12-06 20:52:57.422369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.561 [2024-12-06 20:52:57.422399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:40.561 [2024-12-06 20:52:57.422410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:25:40.561 [2024-12-06 20:52:57.422421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.561 [2024-12-06 20:52:57.422539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.561 [2024-12-06 20:52:57.422558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:40.562 [2024-12-06 20:52:57.422570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:40.562 [2024-12-06 20:52:57.422584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.562 [2024-12-06 20:52:57.440007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.562 [2024-12-06 20:52:57.440232] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:40.562 [2024-12-06 20:52:57.440254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.402 ms 00:25:40.562 [2024-12-06 20:52:57.440266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.562 [2024-12-06 20:52:57.469044] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:40.562 [2024-12-06 20:52:57.473090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.562 [2024-12-06 20:52:57.473137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:40.562 [2024-12-06 20:52:57.473153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.708 ms 00:25:40.562 [2024-12-06 20:52:57.473162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.562 [2024-12-06 20:52:57.576005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.562 [2024-12-06 20:52:57.576251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:40.562 [2024-12-06 20:52:57.576285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.794 ms 00:25:40.562 [2024-12-06 20:52:57.576295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.562 [2024-12-06 20:52:57.576490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.562 [2024-12-06 20:52:57.576507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:40.562 [2024-12-06 20:52:57.576523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:25:40.562 [2024-12-06 20:52:57.576531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.562 [2024-12-06 20:52:57.603623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.562 [2024-12-06 20:52:57.603818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:40.562 [2024-12-06 20:52:57.603847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.032 ms 00:25:40.562 [2024-12-06 20:52:57.603857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.562 [2024-12-06 20:52:57.629589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.562 [2024-12-06 20:52:57.629637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:40.562 [2024-12-06 20:52:57.629653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.669 ms 00:25:40.562 [2024-12-06 20:52:57.629662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.562 [2024-12-06 20:52:57.630341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.562 [2024-12-06 20:52:57.630363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:40.562 [2024-12-06 20:52:57.630376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:25:40.562 [2024-12-06 20:52:57.630388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.824 [2024-12-06 20:52:57.712603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.824 [2024-12-06 20:52:57.712657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:40.824 [2024-12-06 20:52:57.712678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.150 ms 00:25:40.824 [2024-12-06 20:52:57.712686] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.824 [2024-12-06 20:52:57.740800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.824 [2024-12-06 20:52:57.741026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:40.824 [2024-12-06 20:52:57.741057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.013 ms 00:25:40.824 [2024-12-06 20:52:57.741066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.824 [2024-12-06 20:52:57.767319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.824 [2024-12-06 20:52:57.767369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:40.824 [2024-12-06 20:52:57.767384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.202 ms 00:25:40.824 [2024-12-06 20:52:57.767392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.824 [2024-12-06 20:52:57.793738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.824 [2024-12-06 20:52:57.793789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:40.824 [2024-12-06 20:52:57.793805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.290 ms 00:25:40.824 [2024-12-06 20:52:57.793814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.824 [2024-12-06 20:52:57.793872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.824 [2024-12-06 20:52:57.793883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:40.824 [2024-12-06 20:52:57.793915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:40.824 [2024-12-06 20:52:57.793924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.824 [2024-12-06 20:52:57.794032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.824 [2024-12-06 20:52:57.794046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:40.824 [2024-12-06 20:52:57.794057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:40.824 [2024-12-06 20:52:57.794066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.824 [2024-12-06 20:52:57.795317] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4502.383 ms, result 0 00:25:40.824 { 00:25:40.824 "name": "ftl0", 00:25:40.824 "uuid": "c86062d0-7006-471d-8108-6d63e52b68bc" 00:25:40.824 } 00:25:40.824 20:52:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:40.824 20:52:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:41.087 20:52:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:41.087 20:52:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:41.087 20:52:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:41.378 /dev/nbd0 00:25:41.378 20:52:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:41.378 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:41.378 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:41.378 20:52:58 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:41.378 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:41.378 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:41.378 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:41.378 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:41.379 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:41.379 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:41.379 1+0 records in 00:25:41.379 1+0 records out 00:25:41.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000839847 s, 4.9 MB/s 00:25:41.379 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:41.379 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:41.379 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:41.379 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:41.379 20:52:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:41.379 20:52:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:41.379 [2024-12-06 20:52:58.384259] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:25:41.379 [2024-12-06 20:52:58.384403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80037 ] 00:25:41.641 [2024-12-06 20:52:58.547680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.641 [2024-12-06 20:52:58.667482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.028  [2024-12-06T20:53:01.105Z] Copying: 190/1024 [MB] (190 MBps) [2024-12-06T20:53:02.048Z] Copying: 383/1024 [MB] (193 MBps) [2024-12-06T20:53:02.991Z] Copying: 605/1024 [MB] (222 MBps) [2024-12-06T20:53:03.562Z] Copying: 861/1024 [MB] (256 MBps) [2024-12-06T20:53:04.185Z] Copying: 1024/1024 [MB] (average 221 MBps) 00:25:47.052 00:25:47.052 20:53:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:49.593 20:53:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:49.593 [2024-12-06 20:53:06.189943] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
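
For orientation, the write phase traced above reduces to the short shell sketch below. It is a minimal reconstruction assembled only from commands visible in this log (the rpc.py calls, spdk_dd, md5sum); the SPDK and RPC variables are convenience names introduced here, while every bdev name, PCI address, UUID, and size is copied from the trace. As a sanity check on the numbers: the "1024/1024 [MB]" totals in the progress lines are 262144 blocks x 4096 bytes = 1 GiB, and the 80.00 MiB l2p region dumped earlier matches 20971520 L2P entries x 4 bytes per address.

# Minimal sketch of the dirty-shutdown write phase, assuming the paths from this run.
SPDK=/home/vagrant/spdk_repo/spdk
RPC=$SPDK/scripts/rpc.py

# NV cache: attach the PCIe controller, then split off a 5171 MiB write-buffer bdev.
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$RPC bdev_split_create nvc0n1 -s 5171 1    # yields nvc0n1p0

# FTL bdev on the thin-provisioned lvol, L2P capped at 10 MiB of DRAM.
$RPC -t 240 bdev_ftl_create -b ftl0 -d 818ba016-88ff-4e10-881f-51e62d93df94 \
    --l2p_dram_limit 10 -c nvc0n1p0

# Expose ftl0 over NBD, generate 1 GiB of random data, checksum it, write it to the device.
modprobe nbd
$RPC nbd_start_disk ftl0 /dev/nbd0
$SPDK/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=$SPDK/test/ftl/testfile --bs=4096 --count=262144
md5sum $SPDK/test/ftl/testfile    # reference checksum of the written data
$SPDK/build/bin/spdk_dd -m 0x2 --if=$SPDK/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The md5sum produces a reference checksum, presumably re-checked after the device is torn down and brought back up. That is consistent with the startup trace marking "Set FTL dirty state" before the writes, and with the teardown that follows below: sync /dev/nbd0, nbd_stop_disk, then bdev_ftl_unload persisting the L2P, NV cache, valid map, and band metadata before "Set FTL clean state".
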
00:25:49.593 [2024-12-06 20:53:06.190178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80124 ] 00:25:49.593 [2024-12-06 20:53:06.345867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.593 [2024-12-06 20:53:06.421354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.530  [2024-12-06T20:53:08.603Z] Copying: 33/1024 [MB] (33 MBps) [2024-12-06T20:53:09.985Z] Copying: 68/1024 [MB] (34 MBps) [2024-12-06T20:53:10.929Z] Copying: 102/1024 [MB] (33 MBps) [2024-12-06T20:53:11.874Z] Copying: 137/1024 [MB] (35 MBps) [2024-12-06T20:53:12.819Z] Copying: 167/1024 [MB] (30 MBps) [2024-12-06T20:53:13.762Z] Copying: 200/1024 [MB] (32 MBps) [2024-12-06T20:53:14.707Z] Copying: 230/1024 [MB] (29 MBps) [2024-12-06T20:53:15.646Z] Copying: 264/1024 [MB] (34 MBps) [2024-12-06T20:53:17.141Z] Copying: 299/1024 [MB] (35 MBps) [2024-12-06T20:53:17.735Z] Copying: 333/1024 [MB] (33 MBps) [2024-12-06T20:53:18.680Z] Copying: 363/1024 [MB] (30 MBps) [2024-12-06T20:53:19.623Z] Copying: 396/1024 [MB] (32 MBps) [2024-12-06T20:53:21.010Z] Copying: 431/1024 [MB] (35 MBps) [2024-12-06T20:53:21.952Z] Copying: 467/1024 [MB] (35 MBps) [2024-12-06T20:53:22.891Z] Copying: 502/1024 [MB] (35 MBps) [2024-12-06T20:53:23.829Z] Copying: 533/1024 [MB] (30 MBps) [2024-12-06T20:53:24.767Z] Copying: 563/1024 [MB] (30 MBps) [2024-12-06T20:53:25.706Z] Copying: 593/1024 [MB] (30 MBps) [2024-12-06T20:53:26.646Z] Copying: 624/1024 [MB] (30 MBps) [2024-12-06T20:53:28.030Z] Copying: 656/1024 [MB] (32 MBps) [2024-12-06T20:53:28.602Z] Copying: 692/1024 [MB] (35 MBps) [2024-12-06T20:53:29.989Z] Copying: 723/1024 [MB] (31 MBps) [2024-12-06T20:53:30.930Z] Copying: 756/1024 [MB] (32 MBps) [2024-12-06T20:53:31.934Z] Copying: 783/1024 [MB] (27 MBps) [2024-12-06T20:53:32.891Z] Copying: 814/1024 [MB] (30 MBps) [2024-12-06T20:53:33.835Z] Copying: 841/1024 [MB] (27 MBps) [2024-12-06T20:53:34.776Z] Copying: 870/1024 [MB] (29 MBps) [2024-12-06T20:53:35.719Z] Copying: 901/1024 [MB] (30 MBps) [2024-12-06T20:53:36.662Z] Copying: 936/1024 [MB] (35 MBps) [2024-12-06T20:53:37.605Z] Copying: 972/1024 [MB] (35 MBps) [2024-12-06T20:53:38.549Z] Copying: 1004/1024 [MB] (32 MBps) [2024-12-06T20:53:38.809Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:26:21.676 00:26:21.937 20:53:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:21.937 20:53:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:21.937 20:53:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:22.199 [2024-12-06 20:53:39.205770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.199 [2024-12-06 20:53:39.205933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:22.199 [2024-12-06 20:53:39.205952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:22.199 [2024-12-06 20:53:39.205960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.199 [2024-12-06 20:53:39.205986] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:22.199 [2024-12-06 20:53:39.208091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:22.199 [2024-12-06 20:53:39.208128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:22.199 [2024-12-06 20:53:39.208138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:26:22.199 [2024-12-06 20:53:39.208144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.199 [2024-12-06 20:53:39.209755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.199 [2024-12-06 20:53:39.209854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:22.199 [2024-12-06 20:53:39.209869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.588 ms 00:26:22.199 [2024-12-06 20:53:39.209875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.199 [2024-12-06 20:53:39.222937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.199 [2024-12-06 20:53:39.222964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:22.199 [2024-12-06 20:53:39.222973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.033 ms 00:26:22.199 [2024-12-06 20:53:39.222979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.199 [2024-12-06 20:53:39.227743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.199 [2024-12-06 20:53:39.227765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:22.199 [2024-12-06 20:53:39.227775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.738 ms 00:26:22.199 [2024-12-06 20:53:39.227782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.199 [2024-12-06 20:53:39.246250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.199 [2024-12-06 20:53:39.246275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:22.199 [2024-12-06 20:53:39.246285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.414 ms 00:26:22.199 [2024-12-06 20:53:39.246291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.199 [2024-12-06 20:53:39.258387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.199 [2024-12-06 20:53:39.258414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:22.199 [2024-12-06 20:53:39.258427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.064 ms 00:26:22.199 [2024-12-06 20:53:39.258434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.199 [2024-12-06 20:53:39.258540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.199 [2024-12-06 20:53:39.258548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:22.199 [2024-12-06 20:53:39.258556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:22.199 [2024-12-06 20:53:39.258562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.199 [2024-12-06 20:53:39.276204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.199 [2024-12-06 20:53:39.276229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:22.199 [2024-12-06 20:53:39.276238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.628 ms 00:26:22.199 [2024-12-06 20:53:39.276244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.199 [2024-12-06 
20:53:39.293673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.199 [2024-12-06 20:53:39.293696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:22.199 [2024-12-06 20:53:39.293705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.400 ms 00:26:22.199 [2024-12-06 20:53:39.293710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.199 [2024-12-06 20:53:39.310385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.199 [2024-12-06 20:53:39.310485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:22.199 [2024-12-06 20:53:39.310500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.645 ms 00:26:22.199 [2024-12-06 20:53:39.310506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.199 [2024-12-06 20:53:39.327083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.199 [2024-12-06 20:53:39.327107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:22.199 [2024-12-06 20:53:39.327116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.524 ms 00:26:22.199 [2024-12-06 20:53:39.327122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.199 [2024-12-06 20:53:39.327150] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:22.199 [2024-12-06 20:53:39.327161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:22.199 [2024-12-06 20:53:39.327169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:22.199 [2024-12-06 20:53:39.327176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:22.199 [2024-12-06 20:53:39.327183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 
261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:22.200 [2024-12-06 20:53:39.327416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:26:22.200 [2024-12-06 20:53:39.327422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 41-100: 0 / 261120 wr_cnt: 0 state: free (identical entries for all 60 bands, 20:53:39.327422-327821, condensed)
00:26:22.201 [2024-12-06 20:53:39.327833] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:26:22.201 [2024-12-06 20:53:39.327840] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c86062d0-7006-471d-8108-6d63e52b68bc
00:26:22.201 [2024-12-06 20:53:39.327846] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:26:22.201 [2024-12-06 20:53:39.327853] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:26:22.201 [2024-12-06 20:53:39.327860] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:26:22.201 [2024-12-06 20:53:39.327867] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:26:22.201 [2024-12-06 20:53:39.327872] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:22.201 [2024-12-06 20:53:39.327879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:26:22.201 [2024-12-06 20:53:39.327885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:26:22.201 [2024-12-06 20:53:39.327905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:26:22.201 [2024-12-06 20:53:39.327910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:26:22.201 [2024-12-06 20:53:39.327917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.201 [2024-12-06 20:53:39.327922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:26:22.201 [2024-12-06 20:53:39.327930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms
00:26:22.201 [2024-12-06 20:53:39.327936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:22.462 [2024-12-06 20:53:39.337312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.462 [2024-12-06 20:53:39.337407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:26:22.462 [2024-12-06 20:53:39.337420] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.353 ms 00:26:22.462 [2024-12-06 20:53:39.337426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.462 [2024-12-06 20:53:39.337690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.462 [2024-12-06 20:53:39.337697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:22.463 [2024-12-06 20:53:39.337705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:26:22.463 [2024-12-06 20:53:39.337711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.370437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.463 [2024-12-06 20:53:39.370465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:22.463 [2024-12-06 20:53:39.370475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.463 [2024-12-06 20:53:39.370480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.370525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.463 [2024-12-06 20:53:39.370532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:22.463 [2024-12-06 20:53:39.370539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.463 [2024-12-06 20:53:39.370545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.370620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.463 [2024-12-06 20:53:39.370629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:22.463 [2024-12-06 20:53:39.370637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.463 [2024-12-06 20:53:39.370642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.370658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.463 [2024-12-06 20:53:39.370664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:22.463 [2024-12-06 20:53:39.370671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.463 [2024-12-06 20:53:39.370677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.429290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.463 [2024-12-06 20:53:39.429320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:22.463 [2024-12-06 20:53:39.429330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.463 [2024-12-06 20:53:39.429336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.477551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.463 [2024-12-06 20:53:39.477582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:22.463 [2024-12-06 20:53:39.477592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.463 [2024-12-06 20:53:39.477598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.477650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.463 [2024-12-06 20:53:39.477658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:26:22.463 [2024-12-06 20:53:39.477668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.463 [2024-12-06 20:53:39.477674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.477722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.463 [2024-12-06 20:53:39.477729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:22.463 [2024-12-06 20:53:39.477737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.463 [2024-12-06 20:53:39.477743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.477811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.463 [2024-12-06 20:53:39.477818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:22.463 [2024-12-06 20:53:39.477825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.463 [2024-12-06 20:53:39.477833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.477859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.463 [2024-12-06 20:53:39.477866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:22.463 [2024-12-06 20:53:39.477874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.463 [2024-12-06 20:53:39.477880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.477931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.463 [2024-12-06 20:53:39.477938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:22.463 [2024-12-06 20:53:39.477945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.463 [2024-12-06 20:53:39.477953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.477988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.463 [2024-12-06 20:53:39.477996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:22.463 [2024-12-06 20:53:39.478003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.463 [2024-12-06 20:53:39.478009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.463 [2024-12-06 20:53:39.478112] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 272.314 ms, result 0 00:26:22.463 true 00:26:22.463 20:53:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79884 00:26:22.463 20:53:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79884 00:26:22.463 20:53:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:22.463 [2024-12-06 20:53:39.567531] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
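A note on the numbers in the spdk_dd invocation above: --bs=4096 --count=262144 requests exactly 1 GiB, which is why the progress counter just below runs to 1024 MB. A minimal sanity check, assuming plain POSIX shell arithmetic and awk (not part of the test script itself):

# bytes requested by the spdk_dd run above: bs * count
echo $(( 4096 * 262144 ))   # 1073741824 bytes = 1024 MiB
# at the ~253 MBps average reported for this copy, expected wall time:
awk 'BEGIN { printf "%.1f s\n", 1024 / 253 }'   # ~4.0 s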
00:26:22.463 [2024-12-06 20:53:39.567649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80471 ] 00:26:22.724 [2024-12-06 20:53:39.725424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.724 [2024-12-06 20:53:39.802040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.109  [2024-12-06T20:53:42.184Z] Copying: 254/1024 [MB] (254 MBps) [2024-12-06T20:53:43.127Z] Copying: 510/1024 [MB] (256 MBps) [2024-12-06T20:53:44.065Z] Copying: 764/1024 [MB] (254 MBps) [2024-12-06T20:53:44.065Z] Copying: 1015/1024 [MB] (251 MBps) [2024-12-06T20:53:44.634Z] Copying: 1024/1024 [MB] (average 253 MBps) 00:26:27.501 00:26:27.501 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79884 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:27.501 20:53:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:27.761 [2024-12-06 20:53:44.644704] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:26:27.761 [2024-12-06 20:53:44.644794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80530 ] 00:26:27.761 [2024-12-06 20:53:44.794072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.761 [2024-12-06 20:53:44.868812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.021 [2024-12-06 20:53:45.078830] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:28.021 [2024-12-06 20:53:45.078882] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:28.021 [2024-12-06 20:53:45.141472] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:28.021 [2024-12-06 20:53:45.141748] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:28.021 [2024-12-06 20:53:45.141952] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:28.282 [2024-12-06 20:53:45.310537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.282 [2024-12-06 20:53:45.310578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:28.282 [2024-12-06 20:53:45.310591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:28.282 [2024-12-06 20:53:45.310601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.282 [2024-12-06 20:53:45.310647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.282 [2024-12-06 20:53:45.310657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:28.282 [2024-12-06 20:53:45.310665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:28.282 [2024-12-06 20:53:45.310672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.282 [2024-12-06 20:53:45.310694] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:28.282 
[2024-12-06 20:53:45.311795] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:28.282 [2024-12-06 20:53:45.311837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.282 [2024-12-06 20:53:45.311847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:28.282 [2024-12-06 20:53:45.311856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.146 ms 00:26:28.282 [2024-12-06 20:53:45.311863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.282 [2024-12-06 20:53:45.312946] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:28.282 [2024-12-06 20:53:45.325640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.282 [2024-12-06 20:53:45.325672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:28.282 [2024-12-06 20:53:45.325683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.695 ms 00:26:28.282 [2024-12-06 20:53:45.325691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.282 [2024-12-06 20:53:45.325743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.282 [2024-12-06 20:53:45.325752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:28.282 [2024-12-06 20:53:45.325760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:28.282 [2024-12-06 20:53:45.325767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.282 [2024-12-06 20:53:45.330673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.282 [2024-12-06 20:53:45.330703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:28.282 [2024-12-06 20:53:45.330712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.850 ms 00:26:28.282 [2024-12-06 20:53:45.330719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.282 [2024-12-06 20:53:45.330791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.282 [2024-12-06 20:53:45.330800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:28.282 [2024-12-06 20:53:45.330808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:28.282 [2024-12-06 20:53:45.330814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.282 [2024-12-06 20:53:45.330854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.282 [2024-12-06 20:53:45.330863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:28.282 [2024-12-06 20:53:45.330870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:28.282 [2024-12-06 20:53:45.330877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.282 [2024-12-06 20:53:45.330916] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:28.282 [2024-12-06 20:53:45.334216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.282 [2024-12-06 20:53:45.334242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:28.282 [2024-12-06 20:53:45.334251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.305 ms 00:26:28.282 [2024-12-06 20:53:45.334258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:28.282 [2024-12-06 20:53:45.334287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.282 [2024-12-06 20:53:45.334295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:28.282 [2024-12-06 20:53:45.334302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:28.282 [2024-12-06 20:53:45.334309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.282 [2024-12-06 20:53:45.334330] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:28.282 [2024-12-06 20:53:45.334350] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:28.282 [2024-12-06 20:53:45.334384] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:28.282 [2024-12-06 20:53:45.334397] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:28.282 [2024-12-06 20:53:45.334498] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:28.282 [2024-12-06 20:53:45.334508] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:28.282 [2024-12-06 20:53:45.334518] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:28.282 [2024-12-06 20:53:45.334530] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:28.282 [2024-12-06 20:53:45.334539] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:28.282 [2024-12-06 20:53:45.334546] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:28.282 [2024-12-06 20:53:45.334554] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:28.282 [2024-12-06 20:53:45.334561] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:28.282 [2024-12-06 20:53:45.334568] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:28.282 [2024-12-06 20:53:45.334575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.282 [2024-12-06 20:53:45.334582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:28.282 [2024-12-06 20:53:45.334589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:26:28.282 [2024-12-06 20:53:45.334596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.282 [2024-12-06 20:53:45.334678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.282 [2024-12-06 20:53:45.334688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:28.282 [2024-12-06 20:53:45.334695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:28.282 [2024-12-06 20:53:45.334702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.282 [2024-12-06 20:53:45.334800] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:28.282 [2024-12-06 20:53:45.334809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:28.282 [2024-12-06 20:53:45.334817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:28.282 [2024-12-06 20:53:45.334825] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.282 [2024-12-06 20:53:45.334832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:28.282 [2024-12-06 20:53:45.334838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:28.282 [2024-12-06 20:53:45.334845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:28.282 [2024-12-06 20:53:45.334852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:28.282 [2024-12-06 20:53:45.334859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:28.282 [2024-12-06 20:53:45.334870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:28.282 [2024-12-06 20:53:45.334877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:28.283 [2024-12-06 20:53:45.334883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:28.283 [2024-12-06 20:53:45.334901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:28.283 [2024-12-06 20:53:45.334909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:28.283 [2024-12-06 20:53:45.334916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:28.283 [2024-12-06 20:53:45.334923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.283 [2024-12-06 20:53:45.334929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:28.283 [2024-12-06 20:53:45.334936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:28.283 [2024-12-06 20:53:45.334943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.283 [2024-12-06 20:53:45.334949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:28.283 [2024-12-06 20:53:45.334956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:28.283 [2024-12-06 20:53:45.334962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:28.283 [2024-12-06 20:53:45.334968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:28.283 [2024-12-06 20:53:45.334975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:28.283 [2024-12-06 20:53:45.334981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:28.283 [2024-12-06 20:53:45.334988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:28.283 [2024-12-06 20:53:45.334994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:28.283 [2024-12-06 20:53:45.335000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:28.283 [2024-12-06 20:53:45.335006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:28.283 [2024-12-06 20:53:45.335013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:28.283 [2024-12-06 20:53:45.335020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:28.283 [2024-12-06 20:53:45.335026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:28.283 [2024-12-06 20:53:45.335033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:28.283 [2024-12-06 20:53:45.335040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:28.283 [2024-12-06 20:53:45.335046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:28.283 
[2024-12-06 20:53:45.335052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:28.283 [2024-12-06 20:53:45.335059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:28.283 [2024-12-06 20:53:45.335065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:28.283 [2024-12-06 20:53:45.335072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:28.283 [2024-12-06 20:53:45.335078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.283 [2024-12-06 20:53:45.335085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:28.283 [2024-12-06 20:53:45.335091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:28.283 [2024-12-06 20:53:45.335097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.283 [2024-12-06 20:53:45.335104] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:28.283 [2024-12-06 20:53:45.335111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:28.283 [2024-12-06 20:53:45.335121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:28.283 [2024-12-06 20:53:45.335128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.283 [2024-12-06 20:53:45.335136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:28.283 [2024-12-06 20:53:45.335142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:28.283 [2024-12-06 20:53:45.335149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:28.283 [2024-12-06 20:53:45.335155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:28.283 [2024-12-06 20:53:45.335161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:28.283 [2024-12-06 20:53:45.335168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:28.283 [2024-12-06 20:53:45.335176] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:28.283 [2024-12-06 20:53:45.335185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:28.283 [2024-12-06 20:53:45.335193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:28.283 [2024-12-06 20:53:45.335200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:28.283 [2024-12-06 20:53:45.335207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:28.283 [2024-12-06 20:53:45.335214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:28.283 [2024-12-06 20:53:45.335221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:28.283 [2024-12-06 20:53:45.335228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:28.283 [2024-12-06 20:53:45.335235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:26:28.283 [2024-12-06 20:53:45.335243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:28.283 [2024-12-06 20:53:45.335249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:28.283 [2024-12-06 20:53:45.335256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:28.283 [2024-12-06 20:53:45.335263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:28.283 [2024-12-06 20:53:45.335270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:28.283 [2024-12-06 20:53:45.335277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:28.283 [2024-12-06 20:53:45.335284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:28.283 [2024-12-06 20:53:45.335291] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:28.283 [2024-12-06 20:53:45.335299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:28.283 [2024-12-06 20:53:45.335307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:28.283 [2024-12-06 20:53:45.335314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:28.283 [2024-12-06 20:53:45.335321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:28.283 [2024-12-06 20:53:45.335329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:28.283 [2024-12-06 20:53:45.335336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.283 [2024-12-06 20:53:45.335342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:28.283 [2024-12-06 20:53:45.335350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:26:28.283 [2024-12-06 20:53:45.335357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.283 [2024-12-06 20:53:45.361375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.283 [2024-12-06 20:53:45.361500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:28.283 [2024-12-06 20:53:45.361550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.964 ms 00:26:28.283 [2024-12-06 20:53:45.361572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.283 [2024-12-06 20:53:45.361676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.283 [2024-12-06 20:53:45.361698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:28.283 [2024-12-06 20:53:45.361716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:28.283 [2024-12-06 
20:53:45.361735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.283 [2024-12-06 20:53:45.400464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.283 [2024-12-06 20:53:45.400627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:28.283 [2024-12-06 20:53:45.400690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.627 ms 00:26:28.283 [2024-12-06 20:53:45.400714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.283 [2024-12-06 20:53:45.400777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.283 [2024-12-06 20:53:45.400801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:28.283 [2024-12-06 20:53:45.400820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:28.283 [2024-12-06 20:53:45.400839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.283 [2024-12-06 20:53:45.401241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.283 [2024-12-06 20:53:45.401332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:28.283 [2024-12-06 20:53:45.401377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:26:28.283 [2024-12-06 20:53:45.401403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.283 [2024-12-06 20:53:45.401545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.283 [2024-12-06 20:53:45.401616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:28.283 [2024-12-06 20:53:45.401662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:26:28.283 [2024-12-06 20:53:45.401683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-12-06 20:53:45.414820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-12-06 20:53:45.414951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:28.543 [2024-12-06 20:53:45.415001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.106 ms 00:26:28.543 [2024-12-06 20:53:45.415023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-12-06 20:53:45.427924] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:28.543 [2024-12-06 20:53:45.428048] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:28.543 [2024-12-06 20:53:45.428122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-12-06 20:53:45.428605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:28.543 [2024-12-06 20:53:45.428656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.970 ms 00:26:28.543 [2024-12-06 20:53:45.428706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-12-06 20:53:45.453095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-12-06 20:53:45.453214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:28.543 [2024-12-06 20:53:45.453268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.318 ms 00:26:28.543 [2024-12-06 20:53:45.453291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:28.543 [2024-12-06 20:53:45.464909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-12-06 20:53:45.465030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:28.543 [2024-12-06 20:53:45.465079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.565 ms 00:26:28.543 [2024-12-06 20:53:45.465100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-12-06 20:53:45.477419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-12-06 20:53:45.477570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:28.543 [2024-12-06 20:53:45.477628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.047 ms 00:26:28.543 [2024-12-06 20:53:45.477651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-12-06 20:53:45.478295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-12-06 20:53:45.478381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:28.543 [2024-12-06 20:53:45.478435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:26:28.543 [2024-12-06 20:53:45.478477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-12-06 20:53:45.535111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-12-06 20:53:45.535264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:28.543 [2024-12-06 20:53:45.535323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.597 ms 00:26:28.543 [2024-12-06 20:53:45.535346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-12-06 20:53:45.545989] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:28.543 [2024-12-06 20:53:45.548762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-12-06 20:53:45.548870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:28.543 [2024-12-06 20:53:45.548931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.359 ms 00:26:28.543 [2024-12-06 20:53:45.548960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-12-06 20:53:45.549079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-12-06 20:53:45.549152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:28.543 [2024-12-06 20:53:45.549176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:28.543 [2024-12-06 20:53:45.549196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-12-06 20:53:45.549308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.544 [2024-12-06 20:53:45.549365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:28.544 [2024-12-06 20:53:45.549376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:28.544 [2024-12-06 20:53:45.549384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.544 [2024-12-06 20:53:45.549414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.544 [2024-12-06 20:53:45.549423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
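The startup layout dumped above is internally consistent and worth a cross-check: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB "Region l2p" reported, and those entries map 20971520 user blocks of 4 KiB, i.e. 80 GiB of logical space; the ftl_l2p_cache message above then caps the resident portion of that table at 9 of 10 MiB. A minimal sketch of the arithmetic, assuming bash (the ** operator is bash, not POSIX sh):

# L2P table size: entries * address size
echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 -> MiB, matches "Region l2p ... blocks: 80.00 MiB"
# logical capacity addressed by those entries (4 KiB FTL blocks)
echo $(( 20971520 * 4096 / 1024**3 ))    # 80 -> GiB of user-addressable space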
00:26:28.544 [2024-12-06 20:53:45.549432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:28.544 [2024-12-06 20:53:45.549439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.544 [2024-12-06 20:53:45.549471] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:28.544 [2024-12-06 20:53:45.549481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.544 [2024-12-06 20:53:45.549489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:28.544 [2024-12-06 20:53:45.549497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:28.544 [2024-12-06 20:53:45.549508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.544 [2024-12-06 20:53:45.573930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.544 [2024-12-06 20:53:45.574057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:28.544 [2024-12-06 20:53:45.574109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.402 ms 00:26:28.544 [2024-12-06 20:53:45.574132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.544 [2024-12-06 20:53:45.574266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.544 [2024-12-06 20:53:45.574313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:28.544 [2024-12-06 20:53:45.574334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:28.544 [2024-12-06 20:53:45.574352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.544 [2024-12-06 20:53:45.575415] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 264.451 ms, result 0 00:26:29.483  [2024-12-06T20:53:48.019Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-06T20:53:48.593Z] Copying: 26/1024 [MB] (10 MBps) [2024-12-06T20:53:49.996Z] Copying: 42/1024 [MB] (16 MBps) [2024-12-06T20:53:50.940Z] Copying: 55/1024 [MB] (12 MBps) [2024-12-06T20:53:51.885Z] Copying: 71/1024 [MB] (16 MBps) [2024-12-06T20:53:52.828Z] Copying: 86/1024 [MB] (14 MBps) [2024-12-06T20:53:53.771Z] Copying: 97/1024 [MB] (11 MBps) [2024-12-06T20:53:54.714Z] Copying: 116/1024 [MB] (18 MBps) [2024-12-06T20:53:55.657Z] Copying: 140/1024 [MB] (23 MBps) [2024-12-06T20:53:56.599Z] Copying: 175/1024 [MB] (35 MBps) [2024-12-06T20:53:57.987Z] Copying: 197/1024 [MB] (22 MBps) [2024-12-06T20:53:58.932Z] Copying: 218/1024 [MB] (21 MBps) [2024-12-06T20:53:59.877Z] Copying: 243/1024 [MB] (24 MBps) [2024-12-06T20:54:00.818Z] Copying: 261/1024 [MB] (18 MBps) [2024-12-06T20:54:01.764Z] Copying: 279/1024 [MB] (17 MBps) [2024-12-06T20:54:02.709Z] Copying: 293/1024 [MB] (14 MBps) [2024-12-06T20:54:03.669Z] Copying: 315/1024 [MB] (21 MBps) [2024-12-06T20:54:04.634Z] Copying: 332/1024 [MB] (17 MBps) [2024-12-06T20:54:06.019Z] Copying: 349360/1048576 [kB] (9248 kBps) [2024-12-06T20:54:06.959Z] Copying: 355/1024 [MB] (14 MBps) [2024-12-06T20:54:07.903Z] Copying: 373/1024 [MB] (18 MBps) [2024-12-06T20:54:08.842Z] Copying: 392/1024 [MB] (18 MBps) [2024-12-06T20:54:09.782Z] Copying: 413/1024 [MB] (20 MBps) [2024-12-06T20:54:10.728Z] Copying: 429/1024 [MB] (16 MBps) [2024-12-06T20:54:11.674Z] Copying: 440/1024 [MB] (10 MBps) [2024-12-06T20:54:12.619Z] Copying: 451/1024 [MB] (10 MBps) [2024-12-06T20:54:14.007Z] Copying: 462/1024 [MB] (10 MBps) [2024-12-06T20:54:14.947Z] 
Copying: 473/1024 [MB] (10 MBps) [2024-12-06T20:54:15.889Z] Copying: 483/1024 [MB] (10 MBps) [2024-12-06T20:54:16.830Z] Copying: 493/1024 [MB] (10 MBps) [2024-12-06T20:54:17.771Z] Copying: 504/1024 [MB] (10 MBps) [2024-12-06T20:54:18.717Z] Copying: 514/1024 [MB] (10 MBps) [2024-12-06T20:54:19.697Z] Copying: 537240/1048576 [kB] (10024 kBps) [2024-12-06T20:54:20.637Z] Copying: 535/1024 [MB] (10 MBps) [2024-12-06T20:54:22.026Z] Copying: 556/1024 [MB] (21 MBps) [2024-12-06T20:54:22.598Z] Copying: 570/1024 [MB] (13 MBps) [2024-12-06T20:54:23.985Z] Copying: 583/1024 [MB] (12 MBps) [2024-12-06T20:54:24.930Z] Copying: 595/1024 [MB] (12 MBps) [2024-12-06T20:54:25.873Z] Copying: 609/1024 [MB] (13 MBps) [2024-12-06T20:54:26.838Z] Copying: 619/1024 [MB] (10 MBps) [2024-12-06T20:54:27.779Z] Copying: 634/1024 [MB] (14 MBps) [2024-12-06T20:54:28.720Z] Copying: 647/1024 [MB] (13 MBps) [2024-12-06T20:54:29.660Z] Copying: 660/1024 [MB] (12 MBps) [2024-12-06T20:54:30.597Z] Copying: 672/1024 [MB] (12 MBps) [2024-12-06T20:54:31.978Z] Copying: 691/1024 [MB] (18 MBps) [2024-12-06T20:54:32.920Z] Copying: 709/1024 [MB] (18 MBps) [2024-12-06T20:54:33.863Z] Copying: 724/1024 [MB] (14 MBps) [2024-12-06T20:54:34.826Z] Copying: 734/1024 [MB] (10 MBps) [2024-12-06T20:54:35.790Z] Copying: 744/1024 [MB] (10 MBps) [2024-12-06T20:54:36.731Z] Copying: 754/1024 [MB] (10 MBps) [2024-12-06T20:54:37.674Z] Copying: 794/1024 [MB] (39 MBps) [2024-12-06T20:54:38.618Z] Copying: 811/1024 [MB] (16 MBps) [2024-12-06T20:54:40.004Z] Copying: 827/1024 [MB] (16 MBps) [2024-12-06T20:54:40.957Z] Copying: 838/1024 [MB] (10 MBps) [2024-12-06T20:54:41.899Z] Copying: 856/1024 [MB] (17 MBps) [2024-12-06T20:54:42.841Z] Copying: 871/1024 [MB] (15 MBps) [2024-12-06T20:54:43.780Z] Copying: 887/1024 [MB] (15 MBps) [2024-12-06T20:54:44.718Z] Copying: 905/1024 [MB] (18 MBps) [2024-12-06T20:54:45.657Z] Copying: 917/1024 [MB] (11 MBps) [2024-12-06T20:54:46.596Z] Copying: 935/1024 [MB] (18 MBps) [2024-12-06T20:54:47.974Z] Copying: 952/1024 [MB] (16 MBps) [2024-12-06T20:54:48.917Z] Copying: 963/1024 [MB] (11 MBps) [2024-12-06T20:54:49.859Z] Copying: 983/1024 [MB] (20 MBps) [2024-12-06T20:54:50.798Z] Copying: 1000/1024 [MB] (16 MBps) [2024-12-06T20:54:51.752Z] Copying: 1015/1024 [MB] (14 MBps) [2024-12-06T20:54:52.695Z] Copying: 1047804/1048576 [kB] (8308 kBps) [2024-12-06T20:54:52.695Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-06 20:54:52.329882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.562 [2024-12-06 20:54:52.329973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:35.562 [2024-12-06 20:54:52.329991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:35.562 [2024-12-06 20:54:52.330001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.562 [2024-12-06 20:54:52.333410] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:35.562 [2024-12-06 20:54:52.337339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.562 [2024-12-06 20:54:52.337381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:35.562 [2024-12-06 20:54:52.337393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.882 ms 00:27:35.562 [2024-12-06 20:54:52.337410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.562 [2024-12-06 20:54:52.350292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
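The WAF figures in the two statistics dumps bracketing this run follow directly from the counters next to them: WAF is total writes divided by user writes, so the dirty shutdown above (total writes: 960, user writes: 0) logs "WAF: inf", while the clean shutdown below comes out at the reported 1.0093. A one-line check, assuming awk for the floating-point division:

awk 'BEGIN { print 104128 / 103168 }'   # 1.00931 -> rounded to the "WAF: 1.0093" below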
00:27:35.562 [2024-12-06 20:54:52.350339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:35.562 [2024-12-06 20:54:52.350352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.847 ms 00:27:35.562 [2024-12-06 20:54:52.350361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.562 [2024-12-06 20:54:52.373695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.562 [2024-12-06 20:54:52.373757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:35.562 [2024-12-06 20:54:52.373770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.316 ms 00:27:35.562 [2024-12-06 20:54:52.373779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.562 [2024-12-06 20:54:52.379935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.562 [2024-12-06 20:54:52.379969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:35.562 [2024-12-06 20:54:52.379981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.122 ms 00:27:35.562 [2024-12-06 20:54:52.379991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.562 [2024-12-06 20:54:52.406213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.562 [2024-12-06 20:54:52.406259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:35.562 [2024-12-06 20:54:52.406272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.177 ms 00:27:35.562 [2024-12-06 20:54:52.406280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.562 [2024-12-06 20:54:52.422258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.562 [2024-12-06 20:54:52.422299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:35.563 [2024-12-06 20:54:52.422313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.930 ms 00:27:35.563 [2024-12-06 20:54:52.422321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.824 [2024-12-06 20:54:52.704133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.824 [2024-12-06 20:54:52.704180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:35.824 [2024-12-06 20:54:52.704199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 281.759 ms 00:27:35.824 [2024-12-06 20:54:52.704208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.824 [2024-12-06 20:54:52.729651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.824 [2024-12-06 20:54:52.729691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:35.824 [2024-12-06 20:54:52.729702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.427 ms 00:27:35.824 [2024-12-06 20:54:52.729722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.824 [2024-12-06 20:54:52.755196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.824 [2024-12-06 20:54:52.755235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:35.824 [2024-12-06 20:54:52.755246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.430 ms 00:27:35.824 [2024-12-06 20:54:52.755253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.824 [2024-12-06 20:54:52.780197] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:35.824 [2024-12-06 20:54:52.780238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:27:35.824 [2024-12-06 20:54:52.780250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.900 ms
00:27:35.824 [2024-12-06 20:54:52.780257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:35.824 [2024-12-06 20:54:52.804805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:35.825 [2024-12-06 20:54:52.804843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:27:35.825 [2024-12-06 20:54:52.804855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.478 ms
00:27:35.825 [2024-12-06 20:54:52.804863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:35.825 [2024-12-06 20:54:52.804921] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:35.825 [2024-12-06 20:54:52.804937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103168 / 261120 wr_cnt: 1 state: open
00:27:35.825 [2024-12-06 20:54:52.804948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (identical entries for all 99 bands, 20:54:52.804948-805704, condensed)
00:27:35.826 [2024-12-06 20:54:52.805720] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:35.826 [2024-12-06 20:54:52.805728] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c86062d0-7006-471d-8108-6d63e52b68bc
00:27:35.826 [2024-12-06 20:54:52.805750] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103168
00:27:35.826 [2024-12-06 20:54:52.805757] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104128
00:27:35.826 [2024-12-06 20:54:52.805765] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103168
00:27:35.826 [2024-12-06 20:54:52.805773] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093
00:27:35.826 [2024-12-06 20:54:52.805780] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:35.826 [2024-12-06 20:54:52.805788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:35.826 [2024-12-06 20:54:52.805795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:35.826 [2024-12-06 20:54:52.805802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:35.826 [2024-12-06 20:54:52.805810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:35.826 [2024-12-06 20:54:52.805817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:35.826 [2024-12-06 20:54:52.805826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:35.826 [2024-12-06 20:54:52.805834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms
00:27:35.826 [2024-12-06 20:54:52.805842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:35.826 [2024-12-06 20:54:52.819227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:35.826 [2024-12-06 20:54:52.819266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:35.826 [2024-12-06 20:54:52.819278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.366 ms
00:27:35.826 [2024-12-06 20:54:52.819286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:35.826 [2024-12-06 20:54:52.819684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:35.826 [2024-12-06 20:54:52.819694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:27:35.826 [2024-12-06 20:54:52.819710] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:27:35.826 [2024-12-06 20:54:52.819718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.826 [2024-12-06 20:54:52.856114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.826 [2024-12-06 20:54:52.856155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:35.826 [2024-12-06 20:54:52.856167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.826 [2024-12-06 20:54:52.856176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.826 [2024-12-06 20:54:52.856242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.826 [2024-12-06 20:54:52.856252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:35.826 [2024-12-06 20:54:52.856267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.826 [2024-12-06 20:54:52.856276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.826 [2024-12-06 20:54:52.856351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.826 [2024-12-06 20:54:52.856363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:35.826 [2024-12-06 20:54:52.856373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.826 [2024-12-06 20:54:52.856382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.826 [2024-12-06 20:54:52.856398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.826 [2024-12-06 20:54:52.856407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:35.826 [2024-12-06 20:54:52.856415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.826 [2024-12-06 20:54:52.856424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.826 [2024-12-06 20:54:52.940621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.826 [2024-12-06 20:54:52.940670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:35.826 [2024-12-06 20:54:52.940683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.826 [2024-12-06 20:54:52.940693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.085 [2024-12-06 20:54:53.010230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.085 [2024-12-06 20:54:53.010283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:36.085 [2024-12-06 20:54:53.010295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.085 [2024-12-06 20:54:53.010310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.085 [2024-12-06 20:54:53.010366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.085 [2024-12-06 20:54:53.010376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:36.085 [2024-12-06 20:54:53.010385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.085 [2024-12-06 20:54:53.010393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.085 [2024-12-06 20:54:53.010450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.085 [2024-12-06 20:54:53.010460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
00:27:36.085 [2024-12-06 20:54:53.010468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.085 [2024-12-06 20:54:53.010477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.085 [2024-12-06 20:54:53.010581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.085 [2024-12-06 20:54:53.010592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:36.085 [2024-12-06 20:54:53.010601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.085 [2024-12-06 20:54:53.010608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.085 [2024-12-06 20:54:53.010638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.085 [2024-12-06 20:54:53.010647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:36.085 [2024-12-06 20:54:53.010656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.085 [2024-12-06 20:54:53.010664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.085 [2024-12-06 20:54:53.010707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.085 [2024-12-06 20:54:53.010717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:36.085 [2024-12-06 20:54:53.010725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.085 [2024-12-06 20:54:53.010733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.085 [2024-12-06 20:54:53.010779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.085 [2024-12-06 20:54:53.010789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:36.085 [2024-12-06 20:54:53.010798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.085 [2024-12-06 20:54:53.010806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.085 [2024-12-06 20:54:53.010971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 681.503 ms, result 0 00:27:37.473 00:27:37.473 00:27:37.473 20:54:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:40.023 20:54:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:40.023 [2024-12-06 20:54:56.595174] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
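The write-amplification figure in the shutdown statistics dumped above can be checked by hand: WAF is total writes divided by user writes, both reported by ftl_dev_dump_stats. A minimal sketch, assuming only a POSIX shell with bc available:

    # WAF = total writes / user writes, values taken from the dump above
    total_writes=104128
    user_writes=103168
    echo "scale=4; $total_writes / $user_writes" | bc   # prints 1.0093, matching the log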
00:27:40.023 [2024-12-06 20:54:56.595262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81253 ] 00:27:40.023 [2024-12-06 20:54:56.750873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.023 [2024-12-06 20:54:56.850480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.023 [2024-12-06 20:54:57.134116] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:40.023 [2024-12-06 20:54:57.134206] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:40.287 [2024-12-06 20:54:57.293769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.287 [2024-12-06 20:54:57.293839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:40.287 [2024-12-06 20:54:57.293856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:40.287 [2024-12-06 20:54:57.293865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.287 [2024-12-06 20:54:57.293937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.287 [2024-12-06 20:54:57.293952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:40.287 [2024-12-06 20:54:57.293961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:40.287 [2024-12-06 20:54:57.293969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.287 [2024-12-06 20:54:57.293991] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:40.287 [2024-12-06 20:54:57.295089] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:40.287 [2024-12-06 20:54:57.295151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.287 [2024-12-06 20:54:57.295163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:40.287 [2024-12-06 20:54:57.295174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:27:40.287 [2024-12-06 20:54:57.295182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.287 [2024-12-06 20:54:57.296883] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:40.287 [2024-12-06 20:54:57.311296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.287 [2024-12-06 20:54:57.311348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:40.287 [2024-12-06 20:54:57.311362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.415 ms 00:27:40.287 [2024-12-06 20:54:57.311370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.287 [2024-12-06 20:54:57.311451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.287 [2024-12-06 20:54:57.311463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:40.287 [2024-12-06 20:54:57.311473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:40.287 [2024-12-06 20:54:57.311480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.287 [2024-12-06 20:54:57.319442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:40.287 [2024-12-06 20:54:57.319486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:40.287 [2024-12-06 20:54:57.319497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.877 ms 00:27:40.287 [2024-12-06 20:54:57.319511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.287 [2024-12-06 20:54:57.319591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.287 [2024-12-06 20:54:57.319600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:40.287 [2024-12-06 20:54:57.319609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:40.287 [2024-12-06 20:54:57.319617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.287 [2024-12-06 20:54:57.319661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.287 [2024-12-06 20:54:57.319671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:40.287 [2024-12-06 20:54:57.319680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:40.287 [2024-12-06 20:54:57.319689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.287 [2024-12-06 20:54:57.319716] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:40.287 [2024-12-06 20:54:57.323673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.287 [2024-12-06 20:54:57.323712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:40.287 [2024-12-06 20:54:57.323726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.963 ms 00:27:40.287 [2024-12-06 20:54:57.323734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.287 [2024-12-06 20:54:57.323772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.287 [2024-12-06 20:54:57.323781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:40.287 [2024-12-06 20:54:57.323790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:40.287 [2024-12-06 20:54:57.323799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.287 [2024-12-06 20:54:57.323850] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:40.287 [2024-12-06 20:54:57.323876] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:40.287 [2024-12-06 20:54:57.323928] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:40.287 [2024-12-06 20:54:57.323949] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:40.287 [2024-12-06 20:54:57.324056] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:40.287 [2024-12-06 20:54:57.324068] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:40.287 [2024-12-06 20:54:57.324080] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:40.287 [2024-12-06 20:54:57.324102] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:40.287 [2024-12-06 20:54:57.324112] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:40.287 [2024-12-06 20:54:57.324121] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:40.287 [2024-12-06 20:54:57.324129] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:40.287 [2024-12-06 20:54:57.324140] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:40.287 [2024-12-06 20:54:57.324148] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:40.287 [2024-12-06 20:54:57.324157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.287 [2024-12-06 20:54:57.324164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:40.287 [2024-12-06 20:54:57.324172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:27:40.287 [2024-12-06 20:54:57.324180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.287 [2024-12-06 20:54:57.324264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.287 [2024-12-06 20:54:57.324274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:40.287 [2024-12-06 20:54:57.324282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:40.287 [2024-12-06 20:54:57.324289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.287 [2024-12-06 20:54:57.324396] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:40.287 [2024-12-06 20:54:57.324408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:40.287 [2024-12-06 20:54:57.324417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:40.287 [2024-12-06 20:54:57.324426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.287 [2024-12-06 20:54:57.324434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:40.287 [2024-12-06 20:54:57.324441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:40.287 [2024-12-06 20:54:57.324449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:40.287 [2024-12-06 20:54:57.324458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:40.287 [2024-12-06 20:54:57.324466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:40.287 [2024-12-06 20:54:57.324472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:40.287 [2024-12-06 20:54:57.324479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:40.287 [2024-12-06 20:54:57.324486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:40.287 [2024-12-06 20:54:57.324495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:40.287 [2024-12-06 20:54:57.324510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:40.287 [2024-12-06 20:54:57.324517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:40.287 [2024-12-06 20:54:57.324524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.287 [2024-12-06 20:54:57.324531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:40.287 [2024-12-06 20:54:57.324538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:40.287 [2024-12-06 20:54:57.324545] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.287 [2024-12-06 20:54:57.324552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:40.287 [2024-12-06 20:54:57.324559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:40.287 [2024-12-06 20:54:57.324566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.287 [2024-12-06 20:54:57.324573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:40.287 [2024-12-06 20:54:57.324582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:40.287 [2024-12-06 20:54:57.324588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.287 [2024-12-06 20:54:57.324595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:40.287 [2024-12-06 20:54:57.324602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:40.287 [2024-12-06 20:54:57.324609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.287 [2024-12-06 20:54:57.324616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:40.287 [2024-12-06 20:54:57.324624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:40.288 [2024-12-06 20:54:57.324631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.288 [2024-12-06 20:54:57.324637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:40.288 [2024-12-06 20:54:57.324644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:40.288 [2024-12-06 20:54:57.324651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:40.288 [2024-12-06 20:54:57.324658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:40.288 [2024-12-06 20:54:57.324665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:40.288 [2024-12-06 20:54:57.324672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:40.288 [2024-12-06 20:54:57.324680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:40.288 [2024-12-06 20:54:57.324687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:40.288 [2024-12-06 20:54:57.324693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.288 [2024-12-06 20:54:57.324700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:40.288 [2024-12-06 20:54:57.324707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:40.288 [2024-12-06 20:54:57.324714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.288 [2024-12-06 20:54:57.324721] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:40.288 [2024-12-06 20:54:57.324732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:40.288 [2024-12-06 20:54:57.324741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:40.288 [2024-12-06 20:54:57.324749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.288 [2024-12-06 20:54:57.324757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:40.288 [2024-12-06 20:54:57.324764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:40.288 [2024-12-06 20:54:57.324771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:40.288 
[2024-12-06 20:54:57.324778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:40.288 [2024-12-06 20:54:57.324785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:40.288 [2024-12-06 20:54:57.324792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:40.288 [2024-12-06 20:54:57.324802] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:40.288 [2024-12-06 20:54:57.324811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:40.288 [2024-12-06 20:54:57.324823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:40.288 [2024-12-06 20:54:57.324831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:40.288 [2024-12-06 20:54:57.324839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:40.288 [2024-12-06 20:54:57.324847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:40.288 [2024-12-06 20:54:57.324854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:40.288 [2024-12-06 20:54:57.324861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:40.288 [2024-12-06 20:54:57.324868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:40.288 [2024-12-06 20:54:57.324876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:40.288 [2024-12-06 20:54:57.324884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:40.288 [2024-12-06 20:54:57.324915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:40.288 [2024-12-06 20:54:57.324923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:40.288 [2024-12-06 20:54:57.324930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:40.288 [2024-12-06 20:54:57.324938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:40.288 [2024-12-06 20:54:57.324945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:40.288 [2024-12-06 20:54:57.324954] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:40.288 [2024-12-06 20:54:57.324962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:40.288 [2024-12-06 20:54:57.324971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:40.288 [2024-12-06 20:54:57.324978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:40.288 [2024-12-06 20:54:57.324986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:40.288 [2024-12-06 20:54:57.324994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:40.288 [2024-12-06 20:54:57.325002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.288 [2024-12-06 20:54:57.325011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:40.288 [2024-12-06 20:54:57.325020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:27:40.288 [2024-12-06 20:54:57.325028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.288 [2024-12-06 20:54:57.356906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.288 [2024-12-06 20:54:57.356954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:40.288 [2024-12-06 20:54:57.356966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.810 ms 00:27:40.288 [2024-12-06 20:54:57.356978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.288 [2024-12-06 20:54:57.357072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.288 [2024-12-06 20:54:57.357081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:40.288 [2024-12-06 20:54:57.357091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:40.288 [2024-12-06 20:54:57.357099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.288 [2024-12-06 20:54:57.410994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.288 [2024-12-06 20:54:57.411195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:40.288 [2024-12-06 20:54:57.411219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.829 ms 00:27:40.288 [2024-12-06 20:54:57.411229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.288 [2024-12-06 20:54:57.411284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.288 [2024-12-06 20:54:57.411295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:40.288 [2024-12-06 20:54:57.411310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:40.288 [2024-12-06 20:54:57.411318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.288 [2024-12-06 20:54:57.411953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.288 [2024-12-06 20:54:57.411976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:40.288 [2024-12-06 20:54:57.411987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:27:40.288 [2024-12-06 20:54:57.411995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.288 [2024-12-06 20:54:57.412169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.288 [2024-12-06 20:54:57.412180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:40.288 [2024-12-06 20:54:57.412196] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:27:40.288 [2024-12-06 20:54:57.412204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.427739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.427791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:40.551 [2024-12-06 20:54:57.427802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.513 ms 00:27:40.551 [2024-12-06 20:54:57.427811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.442309] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:40.551 [2024-12-06 20:54:57.442495] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:40.551 [2024-12-06 20:54:57.442515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.442524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:40.551 [2024-12-06 20:54:57.442535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.565 ms 00:27:40.551 [2024-12-06 20:54:57.442543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.468440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.468491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:40.551 [2024-12-06 20:54:57.468507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.592 ms 00:27:40.551 [2024-12-06 20:54:57.468517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.481065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.481108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:40.551 [2024-12-06 20:54:57.481120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.491 ms 00:27:40.551 [2024-12-06 20:54:57.481128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.493338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.493379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:40.551 [2024-12-06 20:54:57.493393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.160 ms 00:27:40.551 [2024-12-06 20:54:57.493401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.494096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.494121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:40.551 [2024-12-06 20:54:57.494136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:27:40.551 [2024-12-06 20:54:57.494144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.556717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.556769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:40.551 [2024-12-06 20:54:57.556790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 62.551 ms 00:27:40.551 [2024-12-06 20:54:57.556799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.568117] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:40.551 [2024-12-06 20:54:57.571108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.571146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:40.551 [2024-12-06 20:54:57.571158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.251 ms 00:27:40.551 [2024-12-06 20:54:57.571167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.571254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.571265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:40.551 [2024-12-06 20:54:57.571278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:40.551 [2024-12-06 20:54:57.571287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.573031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.573073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:40.551 [2024-12-06 20:54:57.573085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.706 ms 00:27:40.551 [2024-12-06 20:54:57.573093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.573123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.573133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:40.551 [2024-12-06 20:54:57.573143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:40.551 [2024-12-06 20:54:57.573151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.573198] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:40.551 [2024-12-06 20:54:57.573209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.573217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:40.551 [2024-12-06 20:54:57.573226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:40.551 [2024-12-06 20:54:57.573234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.599349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.599394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:40.551 [2024-12-06 20:54:57.599412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.096 ms 00:27:40.551 [2024-12-06 20:54:57.599422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.551 [2024-12-06 20:54:57.599506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.551 [2024-12-06 20:54:57.599517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:40.551 [2024-12-06 20:54:57.599526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:40.551 [2024-12-06 20:54:57.599535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
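Each FTL management step above is traced as an Action/name/duration/status quadruple by trace_step. To summarize per-step durations from a capture like this one, a small sketch, assuming the raw log keeps one entry per line as the application emits it (ftl.log is a hypothetical file name, not part of the test suite):

    # Pair each trace_step "name:" entry with the "duration:" entry that follows it
    awk '/trace_step.*name: /     { sub(/.*name: /, "");     name = $0 }
         /trace_step.*duration: / { sub(/.*duration: /, ""); print name ": " $0 }' ftl.log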
00:27:40.551 [2024-12-06 20:54:57.600784] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.521 ms, result 0 00:27:41.941  [2024-12-06T20:55:00.019Z] Copying: 980/1048576 [kB] (980 kBps) [2024-12-06T20:55:00.965Z] Copying: 4072/1048576 [kB] (3092 kBps) [2024-12-06T20:55:01.910Z] Copying: 18/1024 [MB] (14 MBps) [2024-12-06T20:55:02.856Z] Copying: 46/1024 [MB] (27 MBps) [2024-12-06T20:55:03.798Z] Copying: 74/1024 [MB] (28 MBps) [2024-12-06T20:55:05.178Z] Copying: 104/1024 [MB] (30 MBps) [2024-12-06T20:55:06.120Z] Copying: 135/1024 [MB] (30 MBps) [2024-12-06T20:55:07.064Z] Copying: 152/1024 [MB] (16 MBps) [2024-12-06T20:55:08.031Z] Copying: 168/1024 [MB] (16 MBps) [2024-12-06T20:55:08.972Z] Copying: 185/1024 [MB] (17 MBps) [2024-12-06T20:55:09.917Z] Copying: 211/1024 [MB] (25 MBps) [2024-12-06T20:55:10.862Z] Copying: 227/1024 [MB] (16 MBps) [2024-12-06T20:55:11.805Z] Copying: 243/1024 [MB] (16 MBps) [2024-12-06T20:55:13.191Z] Copying: 266/1024 [MB] (22 MBps) [2024-12-06T20:55:14.135Z] Copying: 290/1024 [MB] (23 MBps) [2024-12-06T20:55:15.083Z] Copying: 319/1024 [MB] (29 MBps) [2024-12-06T20:55:16.022Z] Copying: 350/1024 [MB] (30 MBps) [2024-12-06T20:55:16.965Z] Copying: 376/1024 [MB] (26 MBps) [2024-12-06T20:55:17.906Z] Copying: 406/1024 [MB] (29 MBps) [2024-12-06T20:55:18.851Z] Copying: 430/1024 [MB] (24 MBps) [2024-12-06T20:55:19.798Z] Copying: 455/1024 [MB] (24 MBps) [2024-12-06T20:55:21.184Z] Copying: 480/1024 [MB] (24 MBps) [2024-12-06T20:55:22.126Z] Copying: 505/1024 [MB] (24 MBps) [2024-12-06T20:55:23.068Z] Copying: 535/1024 [MB] (30 MBps) [2024-12-06T20:55:24.037Z] Copying: 565/1024 [MB] (30 MBps) [2024-12-06T20:55:24.980Z] Copying: 595/1024 [MB] (29 MBps) [2024-12-06T20:55:25.924Z] Copying: 620/1024 [MB] (25 MBps) [2024-12-06T20:55:26.866Z] Copying: 650/1024 [MB] (30 MBps) [2024-12-06T20:55:27.809Z] Copying: 680/1024 [MB] (29 MBps) [2024-12-06T20:55:29.194Z] Copying: 717/1024 [MB] (37 MBps) [2024-12-06T20:55:30.134Z] Copying: 745/1024 [MB] (28 MBps) [2024-12-06T20:55:31.071Z] Copying: 776/1024 [MB] (30 MBps) [2024-12-06T20:55:32.013Z] Copying: 807/1024 [MB] (31 MBps) [2024-12-06T20:55:32.958Z] Copying: 829/1024 [MB] (21 MBps) [2024-12-06T20:55:33.902Z] Copying: 858/1024 [MB] (28 MBps) [2024-12-06T20:55:34.846Z] Copying: 886/1024 [MB] (28 MBps) [2024-12-06T20:55:35.788Z] Copying: 919/1024 [MB] (32 MBps) [2024-12-06T20:55:37.172Z] Copying: 943/1024 [MB] (24 MBps) [2024-12-06T20:55:38.115Z] Copying: 960/1024 [MB] (17 MBps) [2024-12-06T20:55:38.690Z] Copying: 998/1024 [MB] (38 MBps) [2024-12-06T20:55:38.983Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-06 20:55:38.699045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.850 [2024-12-06 20:55:38.699166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:21.850 [2024-12-06 20:55:38.699197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:21.850 [2024-12-06 20:55:38.699216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.850 [2024-12-06 20:55:38.699263] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:21.850 [2024-12-06 20:55:38.705231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.850 [2024-12-06 20:55:38.705276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:21.850 [2024-12-06 20:55:38.705288] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.935 ms 00:28:21.850 [2024-12-06 20:55:38.705296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.850 [2024-12-06 20:55:38.705530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.850 [2024-12-06 20:55:38.705548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:21.850 [2024-12-06 20:55:38.705558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:28:21.850 [2024-12-06 20:55:38.705567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.850 [2024-12-06 20:55:38.721117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.850 [2024-12-06 20:55:38.721169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:21.850 [2024-12-06 20:55:38.721183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.532 ms 00:28:21.850 [2024-12-06 20:55:38.721192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.850 [2024-12-06 20:55:38.727482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.850 [2024-12-06 20:55:38.727524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:21.850 [2024-12-06 20:55:38.727544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.253 ms 00:28:21.851 [2024-12-06 20:55:38.727553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.851 [2024-12-06 20:55:38.754010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.851 [2024-12-06 20:55:38.754059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:21.851 [2024-12-06 20:55:38.754072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.412 ms 00:28:21.851 [2024-12-06 20:55:38.754080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.851 [2024-12-06 20:55:38.769253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.851 [2024-12-06 20:55:38.769301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:21.851 [2024-12-06 20:55:38.769314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.128 ms 00:28:21.851 [2024-12-06 20:55:38.769322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.851 [2024-12-06 20:55:38.773500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.851 [2024-12-06 20:55:38.773547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:21.851 [2024-12-06 20:55:38.773559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.128 ms 00:28:21.851 [2024-12-06 20:55:38.773575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.851 [2024-12-06 20:55:38.799004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.851 [2024-12-06 20:55:38.799066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:21.851 [2024-12-06 20:55:38.799077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.413 ms 00:28:21.851 [2024-12-06 20:55:38.799084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.851 [2024-12-06 20:55:38.824237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.851 [2024-12-06 20:55:38.824284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 
00:28:21.851 [2024-12-06 20:55:38.824297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.108 ms 00:28:21.851 [2024-12-06 20:55:38.824304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.851 [2024-12-06 20:55:38.848948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.851 [2024-12-06 20:55:38.848995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:21.851 [2024-12-06 20:55:38.849006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.600 ms 00:28:21.851 [2024-12-06 20:55:38.849013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.851 [2024-12-06 20:55:38.873364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.851 [2024-12-06 20:55:38.873408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:21.851 [2024-12-06 20:55:38.873419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.281 ms 00:28:21.851 [2024-12-06 20:55:38.873427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.851 [2024-12-06 20:55:38.873469] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:21.851 [2024-12-06 20:55:38.873485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:21.851 [2024-12-06 20:55:38.873496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:21.851 [2024-12-06 20:55:38.873505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 
[2024-12-06 20:55:38.873617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 
state: free 00:28:21.851 [2024-12-06 20:55:38.873807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:21.851 [2024-12-06 20:55:38.873999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 
0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:21.852 [2024-12-06 20:55:38.874302] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:21.852 [2024-12-06 20:55:38.874311] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c86062d0-7006-471d-8108-6d63e52b68bc 00:28:21.852 [2024-12-06 20:55:38.874319] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:21.852 [2024-12-06 20:55:38.874327] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 161472 00:28:21.852 [2024-12-06 20:55:38.874339] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 159488 00:28:21.852 [2024-12-06 20:55:38.874347] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:28:21.852 [2024-12-06 20:55:38.874355] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:21.852 [2024-12-06 20:55:38.874372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:21.852 [2024-12-06 20:55:38.874380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:21.852 [2024-12-06 20:55:38.874388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:21.852 [2024-12-06 20:55:38.874395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:21.852 [2024-12-06 20:55:38.874403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.852 [2024-12-06 20:55:38.874411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:21.852 [2024-12-06 20:55:38.874421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:28:21.852 [2024-12-06 20:55:38.874429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.852 [2024-12-06 20:55:38.888071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.852 [2024-12-06 20:55:38.888124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:21.852 [2024-12-06 20:55:38.888136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.622 ms 00:28:21.852 [2024-12-06 20:55:38.888145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.852 [2024-12-06 
20:55:38.888548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.852 [2024-12-06 20:55:38.888560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:21.852 [2024-12-06 20:55:38.888569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:28:21.852 [2024-12-06 20:55:38.888577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.852 [2024-12-06 20:55:38.924829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.852 [2024-12-06 20:55:38.924881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:21.852 [2024-12-06 20:55:38.924917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.852 [2024-12-06 20:55:38.924926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.852 [2024-12-06 20:55:38.924989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.852 [2024-12-06 20:55:38.924998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:21.852 [2024-12-06 20:55:38.925007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.852 [2024-12-06 20:55:38.925015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.852 [2024-12-06 20:55:38.925105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.852 [2024-12-06 20:55:38.925116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:21.852 [2024-12-06 20:55:38.925124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.852 [2024-12-06 20:55:38.925132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.852 [2024-12-06 20:55:38.925149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.852 [2024-12-06 20:55:38.925158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:21.852 [2024-12-06 20:55:38.925166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.852 [2024-12-06 20:55:38.925173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.114 [2024-12-06 20:55:39.009675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.114 [2024-12-06 20:55:39.009733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:22.114 [2024-12-06 20:55:39.009745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.114 [2024-12-06 20:55:39.009754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.114 [2024-12-06 20:55:39.079264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.114 [2024-12-06 20:55:39.079322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:22.114 [2024-12-06 20:55:39.079334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.114 [2024-12-06 20:55:39.079342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.114 [2024-12-06 20:55:39.079400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.114 [2024-12-06 20:55:39.079416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:22.114 [2024-12-06 20:55:39.079425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.114 [2024-12-06 20:55:39.079434] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.114 [2024-12-06 20:55:39.079492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.114 [2024-12-06 20:55:39.079503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:22.114 [2024-12-06 20:55:39.079512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.114 [2024-12-06 20:55:39.079520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.114 [2024-12-06 20:55:39.079617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.114 [2024-12-06 20:55:39.079629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:22.114 [2024-12-06 20:55:39.079641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.114 [2024-12-06 20:55:39.079649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.114 [2024-12-06 20:55:39.079682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.114 [2024-12-06 20:55:39.079692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:22.114 [2024-12-06 20:55:39.079700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.114 [2024-12-06 20:55:39.079709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.114 [2024-12-06 20:55:39.079749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.114 [2024-12-06 20:55:39.079760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:22.114 [2024-12-06 20:55:39.079771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.114 [2024-12-06 20:55:39.079779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.114 [2024-12-06 20:55:39.079824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.114 [2024-12-06 20:55:39.079835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:22.114 [2024-12-06 20:55:39.079845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.114 [2024-12-06 20:55:39.079853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.114 [2024-12-06 20:55:39.080014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.974 ms, result 0 00:28:22.690 00:28:22.690 00:28:22.952 20:55:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:24.870 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:24.870 20:55:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:24.870 [2024-12-06 20:55:41.935102] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
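(Editor's note on the shutdown statistics above: the numbers in the ftl_dev_dump_stats block are internally consistent. The reported WAF is evidently total writes divided by user writes — an inference from the matching figures, not confirmed from the SPDK sources — and the per-band valid counts sum to the "total valid LBAs" figure; the second dump near the end of this log shows the matching per-band values. A minimal Python check, with the values copied from the dumps:

    # Values quoted from the ftl_dev_dump_stats output above.
    total_writes = 161472              # "total writes: 161472"
    user_writes = 159488               # "user writes: 159488"
    print(f"WAF = {total_writes / user_writes:.4f}")   # -> WAF = 1.0124, matching the log

    # Per-band valid counts from the second dump later in this log:
    # Band 1 (closed) and Band 2 (open); remaining bands are free with 0 valid.
    band_valid = [261120, 1536]
    print(sum(band_valid))             # -> 262656, the reported "total valid LBAs"
)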
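(Editor's note on the spdk_dd read-back above: the command passes --count=262144 and --skip=262144, and the copy progress that follows counts up to 1024 [MB]. Taken together this implies 4 KiB I/O units — an inference from the log totals, not a documented default — so the copy re-reads the 1 GiB region at a 1 GiB offset into testfile2 for the subsequent md5 comparison. A quick sketch of that arithmetic:

    # From the spdk_dd command line and the "Copying: .../1024 [MB]" total below.
    count = skip = 262144                      # I/O units to copy / to skip
    progress_total_bytes = 1024 * 1024 * 1024  # progress counts up to 1024 MB
    unit = progress_total_bytes // count
    print(unit)                                # -> 4096 bytes per unit (4 KiB)
    print(skip * unit // 2**30)                # -> 1: read starts 1 GiB into ftl0
)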
00:28:24.870 [2024-12-06 20:55:41.935240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81711 ] 00:28:25.137 [2024-12-06 20:55:42.090366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.137 [2024-12-06 20:55:42.187788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.399 [2024-12-06 20:55:42.467083] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:25.399 [2024-12-06 20:55:42.467171] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:25.663 [2024-12-06 20:55:42.627839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.663 [2024-12-06 20:55:42.627922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:25.663 [2024-12-06 20:55:42.627938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:25.663 [2024-12-06 20:55:42.627947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.663 [2024-12-06 20:55:42.628003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.663 [2024-12-06 20:55:42.628017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:25.663 [2024-12-06 20:55:42.628025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:25.663 [2024-12-06 20:55:42.628034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.663 [2024-12-06 20:55:42.628055] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:25.663 [2024-12-06 20:55:42.629035] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:25.663 [2024-12-06 20:55:42.629089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.663 [2024-12-06 20:55:42.629100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:25.663 [2024-12-06 20:55:42.629111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:28:25.663 [2024-12-06 20:55:42.629120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.663 [2024-12-06 20:55:42.630832] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:25.663 [2024-12-06 20:55:42.644828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.663 [2024-12-06 20:55:42.644898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:25.663 [2024-12-06 20:55:42.644911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.999 ms 00:28:25.663 [2024-12-06 20:55:42.644920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.663 [2024-12-06 20:55:42.644999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.663 [2024-12-06 20:55:42.645009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:25.663 [2024-12-06 20:55:42.645018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:25.663 [2024-12-06 20:55:42.645026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.663 [2024-12-06 20:55:42.652780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:25.663 [2024-12-06 20:55:42.652823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:25.663 [2024-12-06 20:55:42.652833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.677 ms 00:28:25.663 [2024-12-06 20:55:42.652847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.663 [2024-12-06 20:55:42.652942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.663 [2024-12-06 20:55:42.652953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:25.663 [2024-12-06 20:55:42.652962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:28:25.663 [2024-12-06 20:55:42.652970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.663 [2024-12-06 20:55:42.653012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.663 [2024-12-06 20:55:42.653022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:25.663 [2024-12-06 20:55:42.653031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:25.663 [2024-12-06 20:55:42.653039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.663 [2024-12-06 20:55:42.653065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:25.663 [2024-12-06 20:55:42.656980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.663 [2024-12-06 20:55:42.657020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:25.663 [2024-12-06 20:55:42.657034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.921 ms 00:28:25.663 [2024-12-06 20:55:42.657041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.663 [2024-12-06 20:55:42.657081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.663 [2024-12-06 20:55:42.657090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:25.663 [2024-12-06 20:55:42.657099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:25.663 [2024-12-06 20:55:42.657107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.663 [2024-12-06 20:55:42.657157] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:25.663 [2024-12-06 20:55:42.657182] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:25.663 [2024-12-06 20:55:42.657218] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:25.664 [2024-12-06 20:55:42.657236] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:25.664 [2024-12-06 20:55:42.657342] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:25.664 [2024-12-06 20:55:42.657354] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:25.664 [2024-12-06 20:55:42.657365] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:25.664 [2024-12-06 20:55:42.657375] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:25.664 [2024-12-06 20:55:42.657384] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:25.664 [2024-12-06 20:55:42.657392] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:25.664 [2024-12-06 20:55:42.657400] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:25.664 [2024-12-06 20:55:42.657411] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:25.664 [2024-12-06 20:55:42.657419] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:25.664 [2024-12-06 20:55:42.657428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.664 [2024-12-06 20:55:42.657436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:25.664 [2024-12-06 20:55:42.657444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:28:25.664 [2024-12-06 20:55:42.657451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.664 [2024-12-06 20:55:42.657534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.664 [2024-12-06 20:55:42.657552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:25.664 [2024-12-06 20:55:42.657561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:25.664 [2024-12-06 20:55:42.657569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.664 [2024-12-06 20:55:42.657674] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:25.664 [2024-12-06 20:55:42.657685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:25.664 [2024-12-06 20:55:42.657693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:25.664 [2024-12-06 20:55:42.657701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.664 [2024-12-06 20:55:42.657709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:25.664 [2024-12-06 20:55:42.657716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:25.664 [2024-12-06 20:55:42.657723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:25.664 [2024-12-06 20:55:42.657729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:25.664 [2024-12-06 20:55:42.657737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:25.664 [2024-12-06 20:55:42.657743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:25.664 [2024-12-06 20:55:42.657750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:25.664 [2024-12-06 20:55:42.657757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:25.664 [2024-12-06 20:55:42.657765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:25.664 [2024-12-06 20:55:42.657778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:25.664 [2024-12-06 20:55:42.657785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:25.664 [2024-12-06 20:55:42.657792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.664 [2024-12-06 20:55:42.657799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:25.664 [2024-12-06 20:55:42.657806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:25.664 [2024-12-06 20:55:42.657813] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.664 [2024-12-06 20:55:42.657820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:25.664 [2024-12-06 20:55:42.657826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:25.664 [2024-12-06 20:55:42.657833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.664 [2024-12-06 20:55:42.657840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:25.664 [2024-12-06 20:55:42.657846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:25.664 [2024-12-06 20:55:42.657852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.664 [2024-12-06 20:55:42.657859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:25.664 [2024-12-06 20:55:42.657867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:25.664 [2024-12-06 20:55:42.657873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.664 [2024-12-06 20:55:42.657880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:25.664 [2024-12-06 20:55:42.657901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:25.664 [2024-12-06 20:55:42.657908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.664 [2024-12-06 20:55:42.657915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:25.664 [2024-12-06 20:55:42.657921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:25.664 [2024-12-06 20:55:42.657928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:25.664 [2024-12-06 20:55:42.657936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:25.664 [2024-12-06 20:55:42.657943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:25.664 [2024-12-06 20:55:42.657950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:25.664 [2024-12-06 20:55:42.657957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:25.664 [2024-12-06 20:55:42.657964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:25.664 [2024-12-06 20:55:42.657971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.664 [2024-12-06 20:55:42.657977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:25.664 [2024-12-06 20:55:42.657984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:25.664 [2024-12-06 20:55:42.657990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.664 [2024-12-06 20:55:42.657997] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:25.664 [2024-12-06 20:55:42.658008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:25.664 [2024-12-06 20:55:42.658016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:25.664 [2024-12-06 20:55:42.658023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.664 [2024-12-06 20:55:42.658032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:25.664 [2024-12-06 20:55:42.658039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:25.664 [2024-12-06 20:55:42.658047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:25.664 
[2024-12-06 20:55:42.658054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:25.664 [2024-12-06 20:55:42.658061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:25.664 [2024-12-06 20:55:42.658069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:25.664 [2024-12-06 20:55:42.658077] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:25.664 [2024-12-06 20:55:42.658088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:25.664 [2024-12-06 20:55:42.658100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:25.664 [2024-12-06 20:55:42.658107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:25.664 [2024-12-06 20:55:42.658114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:25.664 [2024-12-06 20:55:42.658121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:25.664 [2024-12-06 20:55:42.658128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:25.664 [2024-12-06 20:55:42.658136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:25.664 [2024-12-06 20:55:42.658143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:25.664 [2024-12-06 20:55:42.658151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:25.664 [2024-12-06 20:55:42.658158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:25.664 [2024-12-06 20:55:42.658166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:25.664 [2024-12-06 20:55:42.658174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:25.664 [2024-12-06 20:55:42.658181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:25.664 [2024-12-06 20:55:42.658188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:25.664 [2024-12-06 20:55:42.658196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:25.664 [2024-12-06 20:55:42.658203] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:25.664 [2024-12-06 20:55:42.658211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:25.664 [2024-12-06 20:55:42.658219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:25.664 [2024-12-06 20:55:42.658226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:25.664 [2024-12-06 20:55:42.658233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:25.664 [2024-12-06 20:55:42.658240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:25.664 [2024-12-06 20:55:42.658249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.664 [2024-12-06 20:55:42.658257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:25.664 [2024-12-06 20:55:42.658265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:28:25.665 [2024-12-06 20:55:42.658272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.665 [2024-12-06 20:55:42.689683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.665 [2024-12-06 20:55:42.689737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:25.665 [2024-12-06 20:55:42.689749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.364 ms 00:28:25.665 [2024-12-06 20:55:42.689761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.665 [2024-12-06 20:55:42.689852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.665 [2024-12-06 20:55:42.689861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:25.665 [2024-12-06 20:55:42.689870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:25.665 [2024-12-06 20:55:42.689879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.665 [2024-12-06 20:55:42.730762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.665 [2024-12-06 20:55:42.730822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:25.665 [2024-12-06 20:55:42.730835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.806 ms 00:28:25.665 [2024-12-06 20:55:42.730844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.665 [2024-12-06 20:55:42.730905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.665 [2024-12-06 20:55:42.730916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:25.665 [2024-12-06 20:55:42.730929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:25.665 [2024-12-06 20:55:42.730937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.665 [2024-12-06 20:55:42.731507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.665 [2024-12-06 20:55:42.731547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:25.665 [2024-12-06 20:55:42.731558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:28:25.665 [2024-12-06 20:55:42.731566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.665 [2024-12-06 20:55:42.731717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.665 [2024-12-06 20:55:42.731728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:25.665 [2024-12-06 20:55:42.731742] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:28:25.665 [2024-12-06 20:55:42.731750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.665 [2024-12-06 20:55:42.747212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.665 [2024-12-06 20:55:42.747258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:25.665 [2024-12-06 20:55:42.747269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.442 ms 00:28:25.665 [2024-12-06 20:55:42.747277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.665 [2024-12-06 20:55:42.761457] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:25.665 [2024-12-06 20:55:42.761508] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:25.665 [2024-12-06 20:55:42.761521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.665 [2024-12-06 20:55:42.761529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:25.665 [2024-12-06 20:55:42.761539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.138 ms 00:28:25.665 [2024-12-06 20:55:42.761547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.665 [2024-12-06 20:55:42.787130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.665 [2024-12-06 20:55:42.787180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:25.665 [2024-12-06 20:55:42.787191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.532 ms 00:28:25.665 [2024-12-06 20:55:42.787201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.927 [2024-12-06 20:55:42.800068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.927 [2024-12-06 20:55:42.800130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:25.927 [2024-12-06 20:55:42.800142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.799 ms 00:28:25.927 [2024-12-06 20:55:42.800150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.927 [2024-12-06 20:55:42.812747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.927 [2024-12-06 20:55:42.812796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:25.927 [2024-12-06 20:55:42.812808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.552 ms 00:28:25.927 [2024-12-06 20:55:42.812816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.927 [2024-12-06 20:55:42.813462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.927 [2024-12-06 20:55:42.813492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:25.927 [2024-12-06 20:55:42.813506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:28:25.927 [2024-12-06 20:55:42.813515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.927 [2024-12-06 20:55:42.877187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.927 [2024-12-06 20:55:42.877252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:25.927 [2024-12-06 20:55:42.877274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.652 ms 00:28:25.927 [2024-12-06 20:55:42.877283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.927 [2024-12-06 20:55:42.888615] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:25.927 [2024-12-06 20:55:42.891811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.927 [2024-12-06 20:55:42.891852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:25.927 [2024-12-06 20:55:42.891864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.468 ms 00:28:25.927 [2024-12-06 20:55:42.891873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.927 [2024-12-06 20:55:42.891978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.927 [2024-12-06 20:55:42.891990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:25.927 [2024-12-06 20:55:42.892003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:25.927 [2024-12-06 20:55:42.892012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.927 [2024-12-06 20:55:42.892843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.927 [2024-12-06 20:55:42.892904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:25.927 [2024-12-06 20:55:42.892916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:28:25.927 [2024-12-06 20:55:42.892925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.927 [2024-12-06 20:55:42.892956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.927 [2024-12-06 20:55:42.892965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:25.927 [2024-12-06 20:55:42.892974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:25.927 [2024-12-06 20:55:42.892983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.927 [2024-12-06 20:55:42.893028] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:25.927 [2024-12-06 20:55:42.893040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.927 [2024-12-06 20:55:42.893049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:25.927 [2024-12-06 20:55:42.893059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:25.928 [2024-12-06 20:55:42.893067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.928 [2024-12-06 20:55:42.918833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.928 [2024-12-06 20:55:42.918883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:25.928 [2024-12-06 20:55:42.918917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.746 ms 00:28:25.928 [2024-12-06 20:55:42.918926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.928 [2024-12-06 20:55:42.919008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.928 [2024-12-06 20:55:42.919018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:25.928 [2024-12-06 20:55:42.919028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:25.928 [2024-12-06 20:55:42.919038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:25.928 [2024-12-06 20:55:42.920278] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.918 ms, result 0 00:28:27.316  [2024-12-06T20:55:45.393Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-06T20:55:46.335Z] Copying: 36/1024 [MB] (16 MBps) [2024-12-06T20:55:47.277Z] Copying: 58/1024 [MB] (22 MBps) [2024-12-06T20:55:48.218Z] Copying: 79/1024 [MB] (20 MBps) [2024-12-06T20:55:49.159Z] Copying: 96/1024 [MB] (17 MBps) [2024-12-06T20:55:50.104Z] Copying: 108/1024 [MB] (12 MBps) [2024-12-06T20:55:51.489Z] Copying: 129/1024 [MB] (20 MBps) [2024-12-06T20:55:52.435Z] Copying: 139/1024 [MB] (10 MBps) [2024-12-06T20:55:53.379Z] Copying: 150/1024 [MB] (10 MBps) [2024-12-06T20:55:54.324Z] Copying: 172/1024 [MB] (21 MBps) [2024-12-06T20:55:55.335Z] Copying: 185/1024 [MB] (13 MBps) [2024-12-06T20:55:56.275Z] Copying: 203/1024 [MB] (18 MBps) [2024-12-06T20:55:57.216Z] Copying: 219/1024 [MB] (16 MBps) [2024-12-06T20:55:58.157Z] Copying: 237/1024 [MB] (17 MBps) [2024-12-06T20:55:59.100Z] Copying: 254/1024 [MB] (16 MBps) [2024-12-06T20:56:00.483Z] Copying: 267/1024 [MB] (12 MBps) [2024-12-06T20:56:01.426Z] Copying: 282/1024 [MB] (14 MBps) [2024-12-06T20:56:02.369Z] Copying: 297/1024 [MB] (15 MBps) [2024-12-06T20:56:03.310Z] Copying: 315/1024 [MB] (18 MBps) [2024-12-06T20:56:04.252Z] Copying: 331/1024 [MB] (15 MBps) [2024-12-06T20:56:05.196Z] Copying: 343/1024 [MB] (11 MBps) [2024-12-06T20:56:06.137Z] Copying: 361/1024 [MB] (18 MBps) [2024-12-06T20:56:07.519Z] Copying: 381/1024 [MB] (20 MBps) [2024-12-06T20:56:08.463Z] Copying: 403/1024 [MB] (21 MBps) [2024-12-06T20:56:09.406Z] Copying: 419/1024 [MB] (15 MBps) [2024-12-06T20:56:10.350Z] Copying: 432/1024 [MB] (13 MBps) [2024-12-06T20:56:11.355Z] Copying: 442/1024 [MB] (10 MBps) [2024-12-06T20:56:12.298Z] Copying: 456/1024 [MB] (14 MBps) [2024-12-06T20:56:13.243Z] Copying: 473/1024 [MB] (16 MBps) [2024-12-06T20:56:14.191Z] Copying: 483/1024 [MB] (10 MBps) [2024-12-06T20:56:15.134Z] Copying: 494/1024 [MB] (10 MBps) [2024-12-06T20:56:16.517Z] Copying: 504/1024 [MB] (10 MBps) [2024-12-06T20:56:17.458Z] Copying: 515/1024 [MB] (10 MBps) [2024-12-06T20:56:18.398Z] Copying: 525/1024 [MB] (10 MBps) [2024-12-06T20:56:19.370Z] Copying: 548/1024 [MB] (23 MBps) [2024-12-06T20:56:20.315Z] Copying: 565/1024 [MB] (16 MBps) [2024-12-06T20:56:21.258Z] Copying: 578/1024 [MB] (12 MBps) [2024-12-06T20:56:22.201Z] Copying: 597/1024 [MB] (19 MBps) [2024-12-06T20:56:23.147Z] Copying: 617/1024 [MB] (20 MBps) [2024-12-06T20:56:24.537Z] Copying: 633/1024 [MB] (15 MBps) [2024-12-06T20:56:25.111Z] Copying: 649/1024 [MB] (15 MBps) [2024-12-06T20:56:26.496Z] Copying: 663/1024 [MB] (14 MBps) [2024-12-06T20:56:27.128Z] Copying: 679/1024 [MB] (15 MBps) [2024-12-06T20:56:28.517Z] Copying: 692/1024 [MB] (13 MBps) [2024-12-06T20:56:29.464Z] Copying: 704/1024 [MB] (12 MBps) [2024-12-06T20:56:30.410Z] Copying: 716/1024 [MB] (12 MBps) [2024-12-06T20:56:31.355Z] Copying: 732/1024 [MB] (15 MBps) [2024-12-06T20:56:32.296Z] Copying: 749/1024 [MB] (16 MBps) [2024-12-06T20:56:33.237Z] Copying: 765/1024 [MB] (16 MBps) [2024-12-06T20:56:34.177Z] Copying: 781/1024 [MB] (15 MBps) [2024-12-06T20:56:35.116Z] Copying: 801/1024 [MB] (20 MBps) [2024-12-06T20:56:36.500Z] Copying: 828/1024 [MB] (26 MBps) [2024-12-06T20:56:37.443Z] Copying: 843/1024 [MB] (15 MBps) [2024-12-06T20:56:38.386Z] Copying: 855/1024 [MB] (12 MBps) [2024-12-06T20:56:39.328Z] Copying: 868/1024 [MB] (12 MBps) [2024-12-06T20:56:40.265Z] Copying: 882/1024 [MB] (13 MBps) 
[2024-12-06T20:56:41.204Z] Copying: 893/1024 [MB] (10 MBps) [2024-12-06T20:56:42.179Z] Copying: 904/1024 [MB] (10 MBps) [2024-12-06T20:56:43.119Z] Copying: 914/1024 [MB] (10 MBps) [2024-12-06T20:56:44.502Z] Copying: 926/1024 [MB] (11 MBps) [2024-12-06T20:56:45.442Z] Copying: 936/1024 [MB] (10 MBps) [2024-12-06T20:56:46.385Z] Copying: 950/1024 [MB] (13 MBps) [2024-12-06T20:56:47.328Z] Copying: 961/1024 [MB] (10 MBps) [2024-12-06T20:56:48.271Z] Copying: 972/1024 [MB] (10 MBps) [2024-12-06T20:56:49.209Z] Copying: 982/1024 [MB] (10 MBps) [2024-12-06T20:56:50.142Z] Copying: 994/1024 [MB] (11 MBps) [2024-12-06T20:56:51.517Z] Copying: 1005/1024 [MB] (11 MBps) [2024-12-06T20:56:51.517Z] Copying: 1020/1024 [MB] (14 MBps) [2024-12-06T20:56:51.517Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-06 20:56:51.376862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.384 [2024-12-06 20:56:51.376921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:34.384 [2024-12-06 20:56:51.376934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:34.384 [2024-12-06 20:56:51.376943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.384 [2024-12-06 20:56:51.376963] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:34.384 [2024-12-06 20:56:51.379546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.384 [2024-12-06 20:56:51.379581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:34.384 [2024-12-06 20:56:51.379591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.569 ms 00:29:34.384 [2024-12-06 20:56:51.379598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.384 [2024-12-06 20:56:51.379802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.384 [2024-12-06 20:56:51.379812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:34.384 [2024-12-06 20:56:51.379819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:29:34.384 [2024-12-06 20:56:51.379827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.384 [2024-12-06 20:56:51.383263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.385 [2024-12-06 20:56:51.383283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:34.385 [2024-12-06 20:56:51.383292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.424 ms 00:29:34.385 [2024-12-06 20:56:51.383303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.385 [2024-12-06 20:56:51.389469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.385 [2024-12-06 20:56:51.389499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:34.385 [2024-12-06 20:56:51.389507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.152 ms 00:29:34.385 [2024-12-06 20:56:51.389514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.385 [2024-12-06 20:56:51.414723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.385 [2024-12-06 20:56:51.414758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:34.385 [2024-12-06 20:56:51.414768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.163 ms 00:29:34.385 [2024-12-06 
20:56:51.414775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.385 [2024-12-06 20:56:51.429359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.385 [2024-12-06 20:56:51.429391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:34.385 [2024-12-06 20:56:51.429402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.553 ms 00:29:34.385 [2024-12-06 20:56:51.429409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.385 [2024-12-06 20:56:51.433196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.385 [2024-12-06 20:56:51.433229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:34.385 [2024-12-06 20:56:51.433238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.748 ms 00:29:34.385 [2024-12-06 20:56:51.433246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.385 [2024-12-06 20:56:51.456371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.385 [2024-12-06 20:56:51.456400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:34.385 [2024-12-06 20:56:51.456410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.112 ms 00:29:34.385 [2024-12-06 20:56:51.456417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.385 [2024-12-06 20:56:51.479403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.385 [2024-12-06 20:56:51.479433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:34.385 [2024-12-06 20:56:51.479443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.957 ms 00:29:34.385 [2024-12-06 20:56:51.479450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.385 [2024-12-06 20:56:51.501940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.385 [2024-12-06 20:56:51.501977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:34.385 [2024-12-06 20:56:51.501986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.460 ms 00:29:34.385 [2024-12-06 20:56:51.501993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.644 [2024-12-06 20:56:51.524495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.644 [2024-12-06 20:56:51.524524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:34.644 [2024-12-06 20:56:51.524534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.450 ms 00:29:34.644 [2024-12-06 20:56:51.524541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.644 [2024-12-06 20:56:51.524570] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:34.644 [2024-12-06 20:56:51.524588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:34.644 [2024-12-06 20:56:51.524600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:34.644 [2024-12-06 20:56:51.524608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524623] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:34.644 [2024-12-06 20:56:51.524724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 
20:56:51.524801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.524986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:29:34.645 [2024-12-06 20:56:51.524993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:34.645 [2024-12-06 20:56:51.525332] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:34.645 [2024-12-06 20:56:51.525339] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c86062d0-7006-471d-8108-6d63e52b68bc 00:29:34.645 [2024-12-06 20:56:51.525347] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:34.645 [2024-12-06 20:56:51.525354] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:34.645 [2024-12-06 
20:56:51.525361] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:34.645 [2024-12-06 20:56:51.525369] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:34.645 [2024-12-06 20:56:51.525381] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:34.645 [2024-12-06 20:56:51.525388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:34.645 [2024-12-06 20:56:51.525395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:34.646 [2024-12-06 20:56:51.525402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:34.646 [2024-12-06 20:56:51.525414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:34.646 [2024-12-06 20:56:51.525421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.646 [2024-12-06 20:56:51.525428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:34.646 [2024-12-06 20:56:51.525435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:29:34.646 [2024-12-06 20:56:51.525444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.537591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.646 [2024-12-06 20:56:51.537621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:34.646 [2024-12-06 20:56:51.537630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.132 ms 00:29:34.646 [2024-12-06 20:56:51.537637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.537987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.646 [2024-12-06 20:56:51.538005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:34.646 [2024-12-06 20:56:51.538013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:29:34.646 [2024-12-06 20:56:51.538019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.570513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.646 [2024-12-06 20:56:51.570545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:34.646 [2024-12-06 20:56:51.570554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.646 [2024-12-06 20:56:51.570561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.570605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.646 [2024-12-06 20:56:51.570616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:34.646 [2024-12-06 20:56:51.570623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.646 [2024-12-06 20:56:51.570630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.570675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.646 [2024-12-06 20:56:51.570684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:34.646 [2024-12-06 20:56:51.570691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.646 [2024-12-06 20:56:51.570698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.570712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:29:34.646 [2024-12-06 20:56:51.570720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:34.646 [2024-12-06 20:56:51.570730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.646 [2024-12-06 20:56:51.570737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.646652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.646 [2024-12-06 20:56:51.646687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:34.646 [2024-12-06 20:56:51.646697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.646 [2024-12-06 20:56:51.646704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.708874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.646 [2024-12-06 20:56:51.708920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:34.646 [2024-12-06 20:56:51.708931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.646 [2024-12-06 20:56:51.708938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.708998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.646 [2024-12-06 20:56:51.709007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:34.646 [2024-12-06 20:56:51.709015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.646 [2024-12-06 20:56:51.709022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.709053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.646 [2024-12-06 20:56:51.709062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:34.646 [2024-12-06 20:56:51.709069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.646 [2024-12-06 20:56:51.709080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.709162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.646 [2024-12-06 20:56:51.709171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:34.646 [2024-12-06 20:56:51.709179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.646 [2024-12-06 20:56:51.709186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.709217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.646 [2024-12-06 20:56:51.709225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:34.646 [2024-12-06 20:56:51.709233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.646 [2024-12-06 20:56:51.709240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.709276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.646 [2024-12-06 20:56:51.709284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:34.646 [2024-12-06 20:56:51.709291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.646 [2024-12-06 20:56:51.709299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 
[2024-12-06 20:56:51.709337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.646 [2024-12-06 20:56:51.709353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:34.646 [2024-12-06 20:56:51.709361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.646 [2024-12-06 20:56:51.709370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.646 [2024-12-06 20:56:51.709474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.587 ms, result 0 00:29:35.581 00:29:35.581 00:29:35.581 20:56:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:37.478 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:37.478 20:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:37.478 20:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:37.478 20:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:37.478 20:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:37.737 20:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:37.737 20:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:37.737 20:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:37.737 20:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79884 00:29:37.737 20:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79884 ']' 00:29:37.737 Process with pid 79884 is not found 00:29:37.737 20:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 79884 00:29:37.737 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79884) - No such process 00:29:37.737 20:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 79884 is not found' 00:29:37.737 20:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:37.996 Remove shared memory files 00:29:37.996 20:56:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:37.996 20:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:37.996 20:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:37.996 20:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:37.996 20:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:37.996 20:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:37.996 20:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:37.996 00:29:37.996 real 4m6.087s 00:29:37.996 user 4m20.417s 00:29:37.996 sys 0m23.443s 00:29:37.996 20:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.996 ************************************ 00:29:37.996 END TEST ftl_dirty_shutdown 00:29:37.996 ************************************ 00:29:37.996 20:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:38.254 20:56:55 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown 
/home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:38.254 20:56:55 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:38.254 20:56:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.254 20:56:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:38.254 ************************************ 00:29:38.254 START TEST ftl_upgrade_shutdown 00:29:38.254 ************************************ 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:38.254 * Looking for test storage... 00:29:38.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:38.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.254 --rc genhtml_branch_coverage=1 00:29:38.254 --rc genhtml_function_coverage=1 00:29:38.254 --rc genhtml_legend=1 00:29:38.254 --rc geninfo_all_blocks=1 00:29:38.254 --rc geninfo_unexecuted_blocks=1 00:29:38.254 00:29:38.254 ' 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:38.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.254 --rc genhtml_branch_coverage=1 00:29:38.254 --rc genhtml_function_coverage=1 00:29:38.254 --rc genhtml_legend=1 00:29:38.254 --rc geninfo_all_blocks=1 00:29:38.254 --rc geninfo_unexecuted_blocks=1 00:29:38.254 00:29:38.254 ' 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:38.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.254 --rc genhtml_branch_coverage=1 00:29:38.254 --rc genhtml_function_coverage=1 00:29:38.254 --rc genhtml_legend=1 00:29:38.254 --rc geninfo_all_blocks=1 00:29:38.254 --rc geninfo_unexecuted_blocks=1 00:29:38.254 00:29:38.254 ' 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:38.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.254 --rc genhtml_branch_coverage=1 00:29:38.254 --rc genhtml_function_coverage=1 00:29:38.254 --rc genhtml_legend=1 00:29:38.254 --rc geninfo_all_blocks=1 00:29:38.254 --rc geninfo_unexecuted_blocks=1 00:29:38.254 00:29:38.254 ' 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:38.254 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:38.255 20:56:55 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82516 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82516 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82516 ']' 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.255 20:56:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:38.512 [2024-12-06 20:56:55.426268] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:29:38.512 [2024-12-06 20:56:55.426389] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82516 ] 00:29:38.512 [2024-12-06 20:56:55.587420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.769 [2024-12-06 20:56:55.683584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:39.334 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:39.593 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:39.593 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:39.593 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:39.593 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:39.593 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:39.593 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:39.593 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:39.593 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:39.851 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:39.851 { 00:29:39.851 "name": "basen1", 00:29:39.851 "aliases": [ 00:29:39.851 "7406a123-5e7a-44b5-9ae3-4b6e0ceda4b5" 00:29:39.851 ], 00:29:39.851 "product_name": "NVMe disk", 00:29:39.851 "block_size": 4096, 00:29:39.851 "num_blocks": 1310720, 00:29:39.851 "uuid": "7406a123-5e7a-44b5-9ae3-4b6e0ceda4b5", 00:29:39.851 "numa_id": -1, 00:29:39.851 "assigned_rate_limits": { 00:29:39.851 "rw_ios_per_sec": 0, 00:29:39.851 "rw_mbytes_per_sec": 0, 00:29:39.851 "r_mbytes_per_sec": 0, 00:29:39.851 "w_mbytes_per_sec": 0 00:29:39.851 }, 00:29:39.851 "claimed": true, 00:29:39.851 "claim_type": "read_many_write_one", 00:29:39.851 "zoned": false, 00:29:39.851 "supported_io_types": { 00:29:39.851 "read": true, 00:29:39.851 "write": true, 00:29:39.852 "unmap": true, 00:29:39.852 "flush": true, 00:29:39.852 "reset": true, 00:29:39.852 "nvme_admin": true, 00:29:39.852 "nvme_io": true, 00:29:39.852 "nvme_io_md": false, 00:29:39.852 "write_zeroes": true, 00:29:39.852 "zcopy": false, 00:29:39.852 "get_zone_info": false, 00:29:39.852 "zone_management": false, 00:29:39.852 "zone_append": false, 00:29:39.852 "compare": true, 00:29:39.852 "compare_and_write": false, 00:29:39.852 "abort": true, 00:29:39.852 "seek_hole": false, 00:29:39.852 "seek_data": false, 00:29:39.852 "copy": true, 00:29:39.852 "nvme_iov_md": false 00:29:39.852 }, 00:29:39.852 "driver_specific": { 00:29:39.852 "nvme": [ 00:29:39.852 { 00:29:39.852 "pci_address": "0000:00:11.0", 00:29:39.852 "trid": { 00:29:39.852 "trtype": "PCIe", 00:29:39.852 "traddr": "0000:00:11.0" 00:29:39.852 }, 00:29:39.852 "ctrlr_data": { 00:29:39.852 "cntlid": 0, 00:29:39.852 "vendor_id": "0x1b36", 00:29:39.852 "model_number": "QEMU NVMe Ctrl", 00:29:39.852 "serial_number": "12341", 00:29:39.852 "firmware_revision": "8.0.0", 00:29:39.852 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:39.852 "oacs": { 00:29:39.852 "security": 0, 00:29:39.852 "format": 1, 00:29:39.852 "firmware": 0, 00:29:39.852 "ns_manage": 1 00:29:39.852 }, 00:29:39.852 "multi_ctrlr": false, 00:29:39.852 "ana_reporting": false 00:29:39.852 }, 00:29:39.852 "vs": { 00:29:39.852 "nvme_version": "1.4" 00:29:39.852 }, 00:29:39.852 "ns_data": { 00:29:39.852 "id": 1, 00:29:39.852 "can_share": false 00:29:39.852 } 00:29:39.852 } 00:29:39.852 ], 00:29:39.852 "mp_policy": "active_passive" 00:29:39.852 } 00:29:39.852 } 00:29:39.852 ]' 00:29:39.852 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:39.852 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:39.852 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:39.852 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:39.852 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:39.852 20:56:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:39.852 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:39.852 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:39.852 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:39.852 20:56:56 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:39.852 20:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:40.110 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=4e7f705c-e454-4534-921b-e3f2dab44134 00:29:40.110 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:40.110 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4e7f705c-e454-4534-921b-e3f2dab44134 00:29:40.369 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=c47c1300-75f9-488b-8e4d-69a466c7cca7 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u c47c1300-75f9-488b-8e4d-69a466c7cca7 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=de4a5624-54f5-4abb-a3a7-be429b7593d1 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z de4a5624-54f5-4abb-a3a7-be429b7593d1 ]] 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 de4a5624-54f5-4abb-a3a7-be429b7593d1 5120 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=de4a5624-54f5-4abb-a3a7-be429b7593d1 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size de4a5624-54f5-4abb-a3a7-be429b7593d1 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=de4a5624-54f5-4abb-a3a7-be429b7593d1 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:40.628 20:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b de4a5624-54f5-4abb-a3a7-be429b7593d1 00:29:40.886 20:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:40.886 { 00:29:40.886 "name": "de4a5624-54f5-4abb-a3a7-be429b7593d1", 00:29:40.886 "aliases": [ 00:29:40.886 "lvs/basen1p0" 00:29:40.886 ], 00:29:40.886 "product_name": "Logical Volume", 00:29:40.886 "block_size": 4096, 00:29:40.886 "num_blocks": 5242880, 00:29:40.886 "uuid": "de4a5624-54f5-4abb-a3a7-be429b7593d1", 00:29:40.886 "assigned_rate_limits": { 00:29:40.886 "rw_ios_per_sec": 0, 00:29:40.886 "rw_mbytes_per_sec": 0, 00:29:40.886 "r_mbytes_per_sec": 0, 00:29:40.886 "w_mbytes_per_sec": 0 00:29:40.886 }, 00:29:40.886 "claimed": false, 00:29:40.886 "zoned": false, 00:29:40.886 "supported_io_types": { 00:29:40.886 "read": true, 00:29:40.886 "write": true, 00:29:40.886 "unmap": true, 00:29:40.886 "flush": false, 00:29:40.886 "reset": true, 00:29:40.886 "nvme_admin": false, 00:29:40.886 "nvme_io": false, 00:29:40.886 "nvme_io_md": false, 00:29:40.886 "write_zeroes": 
true, 00:29:40.886 "zcopy": false, 00:29:40.886 "get_zone_info": false, 00:29:40.886 "zone_management": false, 00:29:40.886 "zone_append": false, 00:29:40.886 "compare": false, 00:29:40.886 "compare_and_write": false, 00:29:40.886 "abort": false, 00:29:40.886 "seek_hole": true, 00:29:40.886 "seek_data": true, 00:29:40.886 "copy": false, 00:29:40.886 "nvme_iov_md": false 00:29:40.886 }, 00:29:40.886 "driver_specific": { 00:29:40.886 "lvol": { 00:29:40.886 "lvol_store_uuid": "c47c1300-75f9-488b-8e4d-69a466c7cca7", 00:29:40.886 "base_bdev": "basen1", 00:29:40.886 "thin_provision": true, 00:29:40.886 "num_allocated_clusters": 0, 00:29:40.886 "snapshot": false, 00:29:40.886 "clone": false, 00:29:40.886 "esnap_clone": false 00:29:40.886 } 00:29:40.886 } 00:29:40.886 } 00:29:40.886 ]' 00:29:40.886 20:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:40.886 20:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:40.886 20:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:40.886 20:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:40.886 20:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:40.886 20:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:40.886 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:40.886 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:40.886 20:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:41.145 20:56:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:41.145 20:56:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:41.145 20:56:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:41.431 20:56:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:41.431 20:56:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:41.431 20:56:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d de4a5624-54f5-4abb-a3a7-be429b7593d1 -c cachen1p0 --l2p_dram_limit 2 00:29:41.709 [2024-12-06 20:56:58.618575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.709 [2024-12-06 20:56:58.618611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:41.709 [2024-12-06 20:56:58.618624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:41.709 [2024-12-06 20:56:58.618631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.709 [2024-12-06 20:56:58.618679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.709 [2024-12-06 20:56:58.618686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:41.709 [2024-12-06 20:56:58.618694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:41.709 [2024-12-06 20:56:58.618700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.709 [2024-12-06 20:56:58.618717] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:41.709 [2024-12-06 
20:56:58.619295] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:41.709 [2024-12-06 20:56:58.619312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.709 [2024-12-06 20:56:58.619318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:41.709 [2024-12-06 20:56:58.619327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.597 ms 00:29:41.709 [2024-12-06 20:56:58.619333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.709 [2024-12-06 20:56:58.619356] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 46369d51-32a1-4457-b2d5-a2bb657f137b 00:29:41.709 [2024-12-06 20:56:58.620313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.709 [2024-12-06 20:56:58.620330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:41.709 [2024-12-06 20:56:58.620338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:41.709 [2024-12-06 20:56:58.620345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.709 [2024-12-06 20:56:58.625061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.709 [2024-12-06 20:56:58.625090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:41.709 [2024-12-06 20:56:58.625097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.659 ms 00:29:41.709 [2024-12-06 20:56:58.625104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.709 [2024-12-06 20:56:58.625134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.709 [2024-12-06 20:56:58.625142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:41.709 [2024-12-06 20:56:58.625148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:41.709 [2024-12-06 20:56:58.625157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.709 [2024-12-06 20:56:58.625191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.709 [2024-12-06 20:56:58.625200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:41.709 [2024-12-06 20:56:58.625208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:41.709 [2024-12-06 20:56:58.625215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.709 [2024-12-06 20:56:58.625230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:41.709 [2024-12-06 20:56:58.628120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.709 [2024-12-06 20:56:58.628142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:41.709 [2024-12-06 20:56:58.628152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.892 ms 00:29:41.709 [2024-12-06 20:56:58.628158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.709 [2024-12-06 20:56:58.628180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.709 [2024-12-06 20:56:58.628186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:41.709 [2024-12-06 20:56:58.628194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:41.709 [2024-12-06 20:56:58.628199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:41.709 [2024-12-06 20:56:58.628213] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:41.709 [2024-12-06 20:56:58.628321] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:41.709 [2024-12-06 20:56:58.628333] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:41.709 [2024-12-06 20:56:58.628342] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:41.709 [2024-12-06 20:56:58.628351] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:41.709 [2024-12-06 20:56:58.628358] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:41.709 [2024-12-06 20:56:58.628365] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:41.709 [2024-12-06 20:56:58.628371] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:41.709 [2024-12-06 20:56:58.628380] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:41.709 [2024-12-06 20:56:58.628385] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:41.709 [2024-12-06 20:56:58.628392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.709 [2024-12-06 20:56:58.628397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:41.709 [2024-12-06 20:56:58.628405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.180 ms 00:29:41.709 [2024-12-06 20:56:58.628410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.709 [2024-12-06 20:56:58.628476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.709 [2024-12-06 20:56:58.628487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:41.709 [2024-12-06 20:56:58.628494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:41.709 [2024-12-06 20:56:58.628500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.709 [2024-12-06 20:56:58.628578] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:41.709 [2024-12-06 20:56:58.628585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:41.709 [2024-12-06 20:56:58.628592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:41.709 [2024-12-06 20:56:58.628599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.709 [2024-12-06 20:56:58.628606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:41.709 [2024-12-06 20:56:58.628611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:41.710 [2024-12-06 20:56:58.628618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:41.710 [2024-12-06 20:56:58.628623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:41.710 [2024-12-06 20:56:58.628629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:41.710 [2024-12-06 20:56:58.628634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.710 [2024-12-06 20:56:58.628642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:41.710 [2024-12-06 20:56:58.628647] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:41.710 [2024-12-06 20:56:58.628653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.710 [2024-12-06 20:56:58.628659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:41.710 [2024-12-06 20:56:58.628667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:41.710 [2024-12-06 20:56:58.628672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.710 [2024-12-06 20:56:58.628680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:41.710 [2024-12-06 20:56:58.628685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:41.710 [2024-12-06 20:56:58.628692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.710 [2024-12-06 20:56:58.628697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:41.710 [2024-12-06 20:56:58.628703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:41.710 [2024-12-06 20:56:58.628708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:41.710 [2024-12-06 20:56:58.628714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:41.710 [2024-12-06 20:56:58.628720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:41.710 [2024-12-06 20:56:58.628726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:41.710 [2024-12-06 20:56:58.628731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:41.710 [2024-12-06 20:56:58.628737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:41.710 [2024-12-06 20:56:58.628742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:41.710 [2024-12-06 20:56:58.628748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:41.710 [2024-12-06 20:56:58.628753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:41.710 [2024-12-06 20:56:58.628759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:41.710 [2024-12-06 20:56:58.628764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:41.710 [2024-12-06 20:56:58.628772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:41.710 [2024-12-06 20:56:58.628777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.710 [2024-12-06 20:56:58.628784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:41.710 [2024-12-06 20:56:58.628789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:41.710 [2024-12-06 20:56:58.628796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.710 [2024-12-06 20:56:58.628801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:41.710 [2024-12-06 20:56:58.628807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:41.710 [2024-12-06 20:56:58.628812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.710 [2024-12-06 20:56:58.628819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:41.710 [2024-12-06 20:56:58.628824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:41.710 [2024-12-06 20:56:58.628829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.710 [2024-12-06 20:56:58.628834] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:41.710 [2024-12-06 20:56:58.628841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:41.710 [2024-12-06 20:56:58.628846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:41.710 [2024-12-06 20:56:58.628854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.710 [2024-12-06 20:56:58.628860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:41.710 [2024-12-06 20:56:58.628868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:41.710 [2024-12-06 20:56:58.628873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:41.710 [2024-12-06 20:56:58.628880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:41.710 [2024-12-06 20:56:58.628885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:41.710 [2024-12-06 20:56:58.628904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:41.710 [2024-12-06 20:56:58.628910] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:41.710 [2024-12-06 20:56:58.628920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:41.710 [2024-12-06 20:56:58.628927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:41.710 [2024-12-06 20:56:58.628934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:41.710 [2024-12-06 20:56:58.628939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:41.710 [2024-12-06 20:56:58.628946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:41.710 [2024-12-06 20:56:58.628951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:41.710 [2024-12-06 20:56:58.628958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:41.710 [2024-12-06 20:56:58.628964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:41.710 [2024-12-06 20:56:58.628972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:41.710 [2024-12-06 20:56:58.628977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:41.710 [2024-12-06 20:56:58.628985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:41.710 [2024-12-06 20:56:58.628990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:41.710 [2024-12-06 20:56:58.628997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:41.710 [2024-12-06 20:56:58.629003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:41.710 [2024-12-06 20:56:58.629010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:41.710 [2024-12-06 20:56:58.629015] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:41.710 [2024-12-06 20:56:58.629023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:41.710 [2024-12-06 20:56:58.629029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:41.710 [2024-12-06 20:56:58.629035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:41.710 [2024-12-06 20:56:58.629041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:41.710 [2024-12-06 20:56:58.629048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:41.710 [2024-12-06 20:56:58.629053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.710 [2024-12-06 20:56:58.629060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:41.710 [2024-12-06 20:56:58.629066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:29:41.710 [2024-12-06 20:56:58.629073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.710 [2024-12-06 20:56:58.629101] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
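The region dumps above are easier to read once the units are explicit: blk_offs and blk_sz are counts of FTL blocks, while the earlier layout summary reports the same regions in MiB. Those MiB figures imply a 4 KiB block size in this run (an inference from the numbers, not something the log states): the base-dev region with blk_offs:0x40 blk_sz:0x480000 lands at 0.25 MiB with a size of 18432.00 MiB, exactly the data_btm region from the summary. A quick shell check of that assumption:

echo $(( 0x480000 * 4096 / 1048576 ))   # 4718592 blocks * 4096 B = 18432 MiB, matching "blocks: 18432.00 MiB" above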
00:29:41.710 [2024-12-06 20:56:58.629111] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:45.002 [2024-12-06 20:57:01.407396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.002 [2024-12-06 20:57:01.407453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:45.002 [2024-12-06 20:57:01.407467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2778.283 ms 00:29:45.002 [2024-12-06 20:57:01.407478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.002 [2024-12-06 20:57:01.432833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.002 [2024-12-06 20:57:01.432875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:45.002 [2024-12-06 20:57:01.432897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.166 ms 00:29:45.002 [2024-12-06 20:57:01.432908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.002 [2024-12-06 20:57:01.432976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.002 [2024-12-06 20:57:01.432988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:45.002 [2024-12-06 20:57:01.432996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:45.002 [2024-12-06 20:57:01.433010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.002 [2024-12-06 20:57:01.463530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.002 [2024-12-06 20:57:01.463565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:45.002 [2024-12-06 20:57:01.463576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.472 ms 00:29:45.002 [2024-12-06 20:57:01.463586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.002 [2024-12-06 20:57:01.463613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.002 [2024-12-06 20:57:01.463625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:45.002 [2024-12-06 20:57:01.463634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:45.002 [2024-12-06 20:57:01.463643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.002 [2024-12-06 20:57:01.464013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.002 [2024-12-06 20:57:01.464033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:45.002 [2024-12-06 20:57:01.464048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.328 ms 00:29:45.002 [2024-12-06 20:57:01.464057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.002 [2024-12-06 20:57:01.464106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.002 [2024-12-06 20:57:01.464116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:45.002 [2024-12-06 20:57:01.464126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:45.002 [2024-12-06 20:57:01.464137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.002 [2024-12-06 20:57:01.478178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.002 [2024-12-06 20:57:01.478208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:45.002 [2024-12-06 20:57:01.478218] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.024 ms 00:29:45.002 [2024-12-06 20:57:01.478227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.002 [2024-12-06 20:57:01.503246] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:45.002 [2024-12-06 20:57:01.504436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.002 [2024-12-06 20:57:01.504466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:45.002 [2024-12-06 20:57:01.504479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.141 ms 00:29:45.002 [2024-12-06 20:57:01.504487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.002 [2024-12-06 20:57:01.528362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.002 [2024-12-06 20:57:01.528395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:45.003 [2024-12-06 20:57:01.528409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.840 ms 00:29:45.003 [2024-12-06 20:57:01.528417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.003 [2024-12-06 20:57:01.528500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.003 [2024-12-06 20:57:01.528512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:45.003 [2024-12-06 20:57:01.528525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:29:45.003 [2024-12-06 20:57:01.528532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.003 [2024-12-06 20:57:01.551262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.003 [2024-12-06 20:57:01.551290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:45.003 [2024-12-06 20:57:01.551303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.684 ms 00:29:45.003 [2024-12-06 20:57:01.551312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.003 [2024-12-06 20:57:01.574365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.003 [2024-12-06 20:57:01.574393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:45.003 [2024-12-06 20:57:01.574405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.013 ms 00:29:45.003 [2024-12-06 20:57:01.574412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.003 [2024-12-06 20:57:01.574980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.003 [2024-12-06 20:57:01.574996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:45.003 [2024-12-06 20:57:01.575006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.535 ms 00:29:45.003 [2024-12-06 20:57:01.575015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.003 [2024-12-06 20:57:01.647601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.003 [2024-12-06 20:57:01.647632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:45.003 [2024-12-06 20:57:01.647648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.552 ms 00:29:45.003 [2024-12-06 20:57:01.647656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.003 [2024-12-06 20:57:01.672269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:45.003 [2024-12-06 20:57:01.672301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:45.003 [2024-12-06 20:57:01.672314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.531 ms 00:29:45.003 [2024-12-06 20:57:01.672322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.003 [2024-12-06 20:57:01.696029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.003 [2024-12-06 20:57:01.696081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:45.003 [2024-12-06 20:57:01.696094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.667 ms 00:29:45.003 [2024-12-06 20:57:01.696101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.003 [2024-12-06 20:57:01.720133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.003 [2024-12-06 20:57:01.720165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:45.003 [2024-12-06 20:57:01.720177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.994 ms 00:29:45.003 [2024-12-06 20:57:01.720185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.003 [2024-12-06 20:57:01.720224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.003 [2024-12-06 20:57:01.720232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:45.003 [2024-12-06 20:57:01.720244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:45.003 [2024-12-06 20:57:01.720252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.003 [2024-12-06 20:57:01.720324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.003 [2024-12-06 20:57:01.720335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:45.003 [2024-12-06 20:57:01.720345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:45.003 [2024-12-06 20:57:01.720352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.003 [2024-12-06 20:57:01.721276] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3102.290 ms, result 0 00:29:45.003 { 00:29:45.003 "name": "ftl", 00:29:45.003 "uuid": "46369d51-32a1-4457-b2d5-a2bb657f137b" 00:29:45.003 } 00:29:45.003 20:57:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:45.003 [2024-12-06 20:57:01.924637] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.003 20:57:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:45.261 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:45.261 [2024-12-06 20:57:02.333032] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:45.261 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:45.520 [2024-12-06 20:57:02.541559] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:45.520 20:57:02 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:45.778 Fill FTL, iteration 1 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=82631 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 82631 /var/tmp/spdk.tgt.sock 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82631 ']' 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:45.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:45.778 20:57:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:46.037 [2024-12-06 20:57:02.968014] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
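Stripped of the xtrace noise, the export sequence driven by common.sh@121-124 above is four RPCs against the main target: create an NVMe/TCP transport, create a subsystem, add the freshly started FTL bdev as its namespace, and listen on loopback, so a separate SPDK process can reach the bdev over NVMe-oF. A minimal recap using the exact commands from the log (only the $rpc shorthand is added here):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
$rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
$rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
$rpc save_config    # snapshot the target config; the log does not show where the output is stored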
00:29:46.037 [2024-12-06 20:57:02.968136] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82631 ] 00:29:46.037 [2024-12-06 20:57:03.127351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.295 [2024-12-06 20:57:03.225039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.860 20:57:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:46.860 20:57:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:46.860 20:57:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:47.116 ftln1 00:29:47.116 20:57:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:47.116 20:57:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:47.374 20:57:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:47.374 20:57:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 82631 00:29:47.374 20:57:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82631 ']' 00:29:47.374 20:57:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82631 00:29:47.374 20:57:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:47.374 20:57:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.374 20:57:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82631 00:29:47.374 20:57:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:47.374 20:57:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:47.374 killing process with pid 82631 00:29:47.374 20:57:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82631' 00:29:47.374 20:57:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82631 00:29:47.374 20:57:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82631 00:29:48.745 20:57:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:48.745 20:57:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:48.745 [2024-12-06 20:57:05.848517] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
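The tcp_dd helper exercised above is the less obvious part of the flow: a short-lived spdk_tgt (pid 82631) is started on its own RPC socket purely so bdev_nvme_attach_controller can connect to the subsystem over loopback and register the namespace as bdev ftln1; its bdev configuration is captured as JSON, the helper is killed, and spdk_dd then replays that JSON via --json so the dd process owns the NVMe/TCP connection itself. A condensed sketch assembled from the xtrace (sockets and paths verbatim; the redirection into ini.json is assumed from the --json= argument, the log only shows the echo and save_subsystem_config pieces):

ini_rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
$ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # prints: ftln1
{ echo '{"subsystems": ['; $ini_rpc save_subsystem_config -n bdev; echo ']}'; } \
    > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
kill "$spdk_ini_pid"   # helper target is no longer needed once the JSON exists
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0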
00:29:48.745 [2024-12-06 20:57:05.848627] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82673 ] 00:29:49.002 [2024-12-06 20:57:06.006024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.002 [2024-12-06 20:57:06.101224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.371  [2024-12-06T20:57:08.878Z] Copying: 216/1024 [MB] (216 MBps) [2024-12-06T20:57:09.811Z] Copying: 471/1024 [MB] (255 MBps) [2024-12-06T20:57:10.744Z] Copying: 729/1024 [MB] (258 MBps) [2024-12-06T20:57:10.744Z] Copying: 996/1024 [MB] (267 MBps) [2024-12-06T20:57:11.310Z] Copying: 1024/1024 [MB] (average 248 MBps) 00:29:54.177 00:29:54.177 Calculate MD5 checksum, iteration 1 00:29:54.177 20:57:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:54.177 20:57:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:54.177 20:57:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:54.177 20:57:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:54.177 20:57:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:54.177 20:57:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:54.177 20:57:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:54.177 20:57:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:54.177 [2024-12-06 20:57:11.203634] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
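Each fill is immediately re-read for verification: the same 1024 MiB window is copied back out of ftln1 into a scratch file over the same NVMe/TCP path and its MD5 digest recorded in the sums array. Judging by the prep_upgrade_on_shutdown flow later in this log, those digests are presumably compared again after the shutdown/restart cycle to prove the data survived. The verification step, near-verbatim from the xtrace (only the $file shorthand is added):

file=/home/vagrant/spdk_repo/spdk/test/ftl/file
tcp_dd --ib=ftln1 --of=$file --bs=1048576 --count=1024 --qd=2 --skip=0   # read the iteration-1 window back out
sums[i]=$(md5sum $file | cut -f1 -d' ')                                  # yields 48f1ee6275f745ec0dc81440f68e238f below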
00:29:54.177 [2024-12-06 20:57:11.203747] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82738 ] 00:29:54.435 [2024-12-06 20:57:11.357389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.435 [2024-12-06 20:57:11.431943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.809  [2024-12-06T20:57:13.200Z] Copying: 715/1024 [MB] (715 MBps) [2024-12-06T20:57:13.768Z] Copying: 1024/1024 [MB] (average 694 MBps) 00:29:56.635 00:29:56.635 20:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:56.635 20:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:58.598 20:57:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:58.598 Fill FTL, iteration 2 00:29:58.598 20:57:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=48f1ee6275f745ec0dc81440f68e238f 00:29:58.598 20:57:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:58.598 20:57:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:58.598 20:57:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:58.598 20:57:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:58.598 20:57:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:58.598 20:57:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:58.598 20:57:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:58.598 20:57:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:58.598 20:57:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:58.855 [2024-12-06 20:57:15.733248] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:29:58.855 [2024-12-06 20:57:15.733366] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82792 ] 00:29:58.855 [2024-12-06 20:57:15.891425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.855 [2024-12-06 20:57:15.984215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.228  [2024-12-06T20:57:18.736Z] Copying: 216/1024 [MB] (216 MBps) [2024-12-06T20:57:19.669Z] Copying: 446/1024 [MB] (230 MBps) [2024-12-06T20:57:20.601Z] Copying: 696/1024 [MB] (250 MBps) [2024-12-06T20:57:20.601Z] Copying: 959/1024 [MB] (263 MBps) [2024-12-06T20:57:21.168Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:30:04.035 00:30:04.293 Calculate MD5 checksum, iteration 2 00:30:04.293 20:57:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:04.293 20:57:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:04.293 20:57:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:04.293 20:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:04.293 20:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:04.293 20:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:04.293 20:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:04.293 20:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:04.293 [2024-12-06 20:57:21.228190] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
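Note that --seek and --skip count in units of --bs, so each iteration addresses a disjoint 1 GiB window: with bs=1048576, seek=1024 starts iteration 2 at byte offset 1024 * 1048576 = 1073741824, exactly the $size set at upgrade_shutdown.sh@28, immediately after iteration 1's data. The loop the xtrace has been stepping through therefore reduces to roughly this shape (tcp_dd is the test's own helper shown earlier; variable names are the ones from the script):

for (( i = 0; i < iterations; i++ )); do                    # iterations=2 in this run
    echo "Fill FTL, iteration $(( i + 1 ))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
    sums[i]=$(md5sum $file | cut -f1 -d' ')
    seek=$(( seek + count )); skip=$(( skip + count ))      # advance one window: 1024 blocks of 1 MiB
done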
00:30:04.293 [2024-12-06 20:57:21.228282] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82845 ] 00:30:04.293 [2024-12-06 20:57:21.374119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.549 [2024-12-06 20:57:21.452030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.824  [2024-12-06T20:57:25.215Z] Copying: 661/1024 [MB] (661 MBps) [2024-12-06T20:57:26.145Z] Copying: 1024/1024 [MB] (average 657 MBps) 00:30:09.012 00:30:09.012 20:57:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:09.012 20:57:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:11.535 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:11.535 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=9beac6cc3cdf509bdebffdd7f971a552 00:30:11.535 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:11.535 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:11.535 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:11.535 [2024-12-06 20:57:28.238555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.535 [2024-12-06 20:57:28.238598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:11.535 [2024-12-06 20:57:28.238610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:11.535 [2024-12-06 20:57:28.238617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.535 [2024-12-06 20:57:28.238637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.535 [2024-12-06 20:57:28.238646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:11.535 [2024-12-06 20:57:28.238654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:11.535 [2024-12-06 20:57:28.238661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.535 [2024-12-06 20:57:28.238677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.535 [2024-12-06 20:57:28.238684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:11.535 [2024-12-06 20:57:28.238691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:11.535 [2024-12-06 20:57:28.238697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.535 [2024-12-06 20:57:28.238748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.182 ms, result 0 00:30:11.535 true 00:30:11.535 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:11.535 { 00:30:11.535 "name": "ftl", 00:30:11.535 "properties": [ 00:30:11.535 { 00:30:11.535 "name": "superblock_version", 00:30:11.535 "value": 5, 00:30:11.535 "read-only": true 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "name": "base_device", 00:30:11.535 "bands": [ 00:30:11.535 { 00:30:11.535 "id": 0, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 
00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 1, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 2, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 3, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 4, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 5, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 6, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 7, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 8, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 9, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 10, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 11, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 12, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 13, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 14, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 15, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 16, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 17, 00:30:11.535 "state": "FREE", 00:30:11.535 "validity": 0.0 00:30:11.535 } 00:30:11.535 ], 00:30:11.535 "read-only": true 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "name": "cache_device", 00:30:11.535 "type": "bdev", 00:30:11.535 "chunks": [ 00:30:11.535 { 00:30:11.535 "id": 0, 00:30:11.535 "state": "INACTIVE", 00:30:11.535 "utilization": 0.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 1, 00:30:11.535 "state": "CLOSED", 00:30:11.535 "utilization": 1.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 2, 00:30:11.535 "state": "CLOSED", 00:30:11.535 "utilization": 1.0 00:30:11.535 }, 00:30:11.535 { 00:30:11.535 "id": 3, 00:30:11.535 "state": "OPEN", 00:30:11.536 "utilization": 0.001953125 00:30:11.536 }, 00:30:11.536 { 00:30:11.536 "id": 4, 00:30:11.536 "state": "OPEN", 00:30:11.536 "utilization": 0.0 00:30:11.536 } 00:30:11.536 ], 00:30:11.536 "read-only": true 00:30:11.536 }, 00:30:11.536 { 00:30:11.536 "name": "verbose_mode", 00:30:11.536 "value": true, 00:30:11.536 "unit": "", 00:30:11.536 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:11.536 }, 00:30:11.536 { 00:30:11.536 "name": "prep_upgrade_on_shutdown", 00:30:11.536 "value": false, 00:30:11.536 "unit": "", 00:30:11.536 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:11.536 } 00:30:11.536 ] 00:30:11.536 } 00:30:11.536 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:11.536 [2024-12-06 20:57:28.634886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:11.536 [2024-12-06 20:57:28.634930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:11.536 [2024-12-06 20:57:28.634940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:11.536 [2024-12-06 20:57:28.634946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.536 [2024-12-06 20:57:28.634965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.536 [2024-12-06 20:57:28.634971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:11.536 [2024-12-06 20:57:28.634977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:11.536 [2024-12-06 20:57:28.634983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.536 [2024-12-06 20:57:28.634998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.536 [2024-12-06 20:57:28.635004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:11.536 [2024-12-06 20:57:28.635010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:11.536 [2024-12-06 20:57:28.635016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.536 [2024-12-06 20:57:28.635061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.166 ms, result 0 00:30:11.536 true 00:30:11.536 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:11.536 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:11.536 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:11.795 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:11.795 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:11.795 20:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:12.063 [2024-12-06 20:57:29.035210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.063 [2024-12-06 20:57:29.035324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:12.063 [2024-12-06 20:57:29.035337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:12.063 [2024-12-06 20:57:29.035343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.063 [2024-12-06 20:57:29.035365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.063 [2024-12-06 20:57:29.035371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:12.063 [2024-12-06 20:57:29.035377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:12.063 [2024-12-06 20:57:29.035383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.063 [2024-12-06 20:57:29.035397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.063 [2024-12-06 20:57:29.035404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:12.063 [2024-12-06 20:57:29.035410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:12.063 [2024-12-06 20:57:29.035415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:12.063 [2024-12-06 20:57:29.035460] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.238 ms, result 0 00:30:12.063 true 00:30:12.063 20:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:12.332 { 00:30:12.332 "name": "ftl", 00:30:12.332 "properties": [ 00:30:12.332 { 00:30:12.332 "name": "superblock_version", 00:30:12.332 "value": 5, 00:30:12.332 "read-only": true 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "name": "base_device", 00:30:12.332 "bands": [ 00:30:12.332 { 00:30:12.332 "id": 0, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 1, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 2, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 3, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 4, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 5, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 6, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 7, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 8, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 9, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 10, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 11, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 12, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 13, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 14, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 15, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 16, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 }, 00:30:12.332 { 00:30:12.332 "id": 17, 00:30:12.332 "state": "FREE", 00:30:12.332 "validity": 0.0 00:30:12.332 } 00:30:12.332 ], 00:30:12.332 "read-only": true 00:30:12.332 }, 00:30:12.332 { 00:30:12.333 "name": "cache_device", 00:30:12.333 "type": "bdev", 00:30:12.333 "chunks": [ 00:30:12.333 { 00:30:12.333 "id": 0, 00:30:12.333 "state": "INACTIVE", 00:30:12.333 "utilization": 0.0 00:30:12.333 }, 00:30:12.333 { 00:30:12.333 "id": 1, 00:30:12.333 "state": "CLOSED", 00:30:12.333 "utilization": 1.0 00:30:12.333 }, 00:30:12.333 { 00:30:12.333 "id": 2, 00:30:12.333 "state": "CLOSED", 00:30:12.333 "utilization": 1.0 00:30:12.333 }, 00:30:12.333 { 00:30:12.333 "id": 3, 00:30:12.333 "state": "OPEN", 00:30:12.333 "utilization": 0.001953125 00:30:12.333 }, 00:30:12.333 { 00:30:12.333 "id": 4, 00:30:12.333 "state": "OPEN", 00:30:12.333 "utilization": 0.0 00:30:12.333 } 00:30:12.333 ], 00:30:12.333 "read-only": true 00:30:12.333 }, 00:30:12.333 { 00:30:12.333 "name": "verbose_mode", 
00:30:12.333 "value": true, 00:30:12.333 "unit": "", 00:30:12.333 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:12.333 }, 00:30:12.333 { 00:30:12.333 "name": "prep_upgrade_on_shutdown", 00:30:12.333 "value": true, 00:30:12.333 "unit": "", 00:30:12.333 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:12.333 } 00:30:12.333 ] 00:30:12.333 } 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82516 ]] 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82516 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82516 ']' 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82516 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82516 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82516' 00:30:12.333 killing process with pid 82516 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82516 00:30:12.333 20:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82516 00:30:12.898 [2024-12-06 20:57:29.807598] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:12.898 [2024-12-06 20:57:29.818182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.898 [2024-12-06 20:57:29.818214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:12.898 [2024-12-06 20:57:29.818225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:12.898 [2024-12-06 20:57:29.818231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.898 [2024-12-06 20:57:29.818249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:12.898 [2024-12-06 20:57:29.820295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.898 [2024-12-06 20:57:29.820319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:12.898 [2024-12-06 20:57:29.820328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.036 ms 00:30:12.898 [2024-12-06 20:57:29.820334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.915 [2024-12-06 20:57:38.200936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.915 [2024-12-06 20:57:38.200982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:22.915 [2024-12-06 20:57:38.200997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8380.554 ms 00:30:22.915 [2024-12-06 20:57:38.201004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.915 [2024-12-06 20:57:38.202159] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:22.915 [2024-12-06 20:57:38.202178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:22.915 [2024-12-06 20:57:38.202186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.142 ms 00:30:22.915 [2024-12-06 20:57:38.202193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.915 [2024-12-06 20:57:38.203097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.915 [2024-12-06 20:57:38.203204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:22.915 [2024-12-06 20:57:38.203218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.884 ms 00:30:22.915 [2024-12-06 20:57:38.203228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.915 [2024-12-06 20:57:38.210651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.915 [2024-12-06 20:57:38.210747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:22.915 [2024-12-06 20:57:38.210758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.395 ms 00:30:22.915 [2024-12-06 20:57:38.210765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.915 [2024-12-06 20:57:38.215937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.916 [2024-12-06 20:57:38.215963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:22.916 [2024-12-06 20:57:38.215971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.150 ms 00:30:22.916 [2024-12-06 20:57:38.215978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.916 [2024-12-06 20:57:38.216031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.916 [2024-12-06 20:57:38.216043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:22.916 [2024-12-06 20:57:38.216050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:22.916 [2024-12-06 20:57:38.216055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.916 [2024-12-06 20:57:38.223018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.916 [2024-12-06 20:57:38.223042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:22.916 [2024-12-06 20:57:38.223049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.942 ms 00:30:22.916 [2024-12-06 20:57:38.223054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.916 [2024-12-06 20:57:38.230322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.916 [2024-12-06 20:57:38.230345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:22.916 [2024-12-06 20:57:38.230352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.244 ms 00:30:22.916 [2024-12-06 20:57:38.230357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.916 [2024-12-06 20:57:38.237563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.916 [2024-12-06 20:57:38.237659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:22.916 [2024-12-06 20:57:38.237670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.182 ms 00:30:22.916 [2024-12-06 20:57:38.237675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.916 [2024-12-06 20:57:38.244706] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.916 [2024-12-06 20:57:38.244798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:22.916 [2024-12-06 20:57:38.244809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.987 ms 00:30:22.916 [2024-12-06 20:57:38.244815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.916 [2024-12-06 20:57:38.244836] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:22.916 [2024-12-06 20:57:38.244852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:22.916 [2024-12-06 20:57:38.244860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:22.916 [2024-12-06 20:57:38.244866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:22.916 [2024-12-06 20:57:38.244872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:22.916 [2024-12-06 20:57:38.244975] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:22.916 [2024-12-06 20:57:38.244982] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 46369d51-32a1-4457-b2d5-a2bb657f137b 00:30:22.916 [2024-12-06 20:57:38.244987] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:22.916 [2024-12-06 20:57:38.244993] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:22.916 [2024-12-06 20:57:38.244998] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:22.916 [2024-12-06 20:57:38.245003] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:22.916 [2024-12-06 20:57:38.245010] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:22.916 [2024-12-06 20:57:38.245016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:22.916 [2024-12-06 20:57:38.245024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:22.916 [2024-12-06 20:57:38.245029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:22.916 [2024-12-06 20:57:38.245034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:22.916 [2024-12-06 20:57:38.245043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.916 [2024-12-06 20:57:38.245049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:22.916 [2024-12-06 20:57:38.245055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:30:22.916 [2024-12-06 20:57:38.245061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.916 [2024-12-06 20:57:38.254479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.916 [2024-12-06 20:57:38.254503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:22.916 [2024-12-06 20:57:38.254515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.398 ms 00:30:22.916 [2024-12-06 20:57:38.254521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.916 [2024-12-06 20:57:38.254785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.916 [2024-12-06 20:57:38.254795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:22.916 [2024-12-06 20:57:38.254802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:30:22.916 [2024-12-06 20:57:38.254807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.916 [2024-12-06 20:57:38.287084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:22.916 [2024-12-06 20:57:38.287187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:22.916 [2024-12-06 20:57:38.287199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:22.916 [2024-12-06 20:57:38.287204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.916 [2024-12-06 20:57:38.287227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:22.916 [2024-12-06 20:57:38.287234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:22.916 [2024-12-06 20:57:38.287240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:22.916 [2024-12-06 20:57:38.287245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.916 [2024-12-06 20:57:38.287306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:22.916 [2024-12-06 20:57:38.287314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:22.916 [2024-12-06 20:57:38.287323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:22.916 [2024-12-06 20:57:38.287330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.916 [2024-12-06 20:57:38.287342] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:22.916 [2024-12-06 20:57:38.287348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:22.916 [2024-12-06 20:57:38.287354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:22.917 [2024-12-06 20:57:38.287360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.917 [2024-12-06 20:57:38.345515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:22.917 [2024-12-06 20:57:38.345551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:22.917 [2024-12-06 20:57:38.345563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:22.917 [2024-12-06 20:57:38.345569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.917 [2024-12-06 20:57:38.394192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:22.917 [2024-12-06 20:57:38.394319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:22.917 [2024-12-06 20:57:38.394331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:22.917 [2024-12-06 20:57:38.394337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.917 [2024-12-06 20:57:38.394393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:22.917 [2024-12-06 20:57:38.394401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:22.917 [2024-12-06 20:57:38.394407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:22.917 [2024-12-06 20:57:38.394417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.917 [2024-12-06 20:57:38.394461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:22.917 [2024-12-06 20:57:38.394469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:22.917 [2024-12-06 20:57:38.394475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:22.917 [2024-12-06 20:57:38.394481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.917 [2024-12-06 20:57:38.394550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:22.917 [2024-12-06 20:57:38.394558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:22.917 [2024-12-06 20:57:38.394564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:22.917 [2024-12-06 20:57:38.394570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.917 [2024-12-06 20:57:38.394596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:22.917 [2024-12-06 20:57:38.394603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:22.917 [2024-12-06 20:57:38.394609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:22.917 [2024-12-06 20:57:38.394615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.917 [2024-12-06 20:57:38.394644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:22.917 [2024-12-06 20:57:38.394651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:22.917 [2024-12-06 20:57:38.394657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:22.917 [2024-12-06 20:57:38.394663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.917 
[2024-12-06 20:57:38.394698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:22.917 [2024-12-06 20:57:38.394706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:22.917 [2024-12-06 20:57:38.394712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:22.917 [2024-12-06 20:57:38.394718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.917 [2024-12-06 20:57:38.394811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8576.584 ms, result 0 00:30:22.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83057 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83057 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83057 ']' 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:22.917 20:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:22.917 [2024-12-06 20:57:39.656484] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
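Note on the statistics block dumped at the start of the shutdown above: the reported write amplification follows directly from the two counters printed with it, WAF = total writes / user writes = 786752 / 524288 ≈ 1.5006. In other words, for the 524288 blocks the test wrote, FTL issued roughly 262464 additional internal writes; the log does not break that overhead down further, but metadata and NV-cache traffic would account for it.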
00:30:22.917 [2024-12-06 20:57:39.656607] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83057 ] 00:30:22.917 [2024-12-06 20:57:39.813047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.917 [2024-12-06 20:57:39.890776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.488 [2024-12-06 20:57:40.464178] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:23.488 [2024-12-06 20:57:40.464391] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:23.488 [2024-12-06 20:57:40.606925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.488 [2024-12-06 20:57:40.607048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:23.488 [2024-12-06 20:57:40.607063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:23.488 [2024-12-06 20:57:40.607070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.488 [2024-12-06 20:57:40.607115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.488 [2024-12-06 20:57:40.607123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:23.488 [2024-12-06 20:57:40.607129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:23.488 [2024-12-06 20:57:40.607135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.488 [2024-12-06 20:57:40.607155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:23.488 [2024-12-06 20:57:40.607690] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:23.488 [2024-12-06 20:57:40.607702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.488 [2024-12-06 20:57:40.607708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:23.488 [2024-12-06 20:57:40.607715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.554 ms 00:30:23.488 [2024-12-06 20:57:40.607720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.488 [2024-12-06 20:57:40.608696] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:23.488 [2024-12-06 20:57:40.618243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.488 [2024-12-06 20:57:40.618346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:23.488 [2024-12-06 20:57:40.618363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.548 ms 00:30:23.488 [2024-12-06 20:57:40.618369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.488 [2024-12-06 20:57:40.618408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.488 [2024-12-06 20:57:40.618416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:23.488 [2024-12-06 20:57:40.618423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:23.488 [2024-12-06 20:57:40.618428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.751 [2024-12-06 20:57:40.622656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.751 [2024-12-06 
20:57:40.622681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:23.751 [2024-12-06 20:57:40.622688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.180 ms 00:30:23.751 [2024-12-06 20:57:40.622694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.751 [2024-12-06 20:57:40.622735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.751 [2024-12-06 20:57:40.622742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:23.751 [2024-12-06 20:57:40.622749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:23.751 [2024-12-06 20:57:40.622754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.751 [2024-12-06 20:57:40.622789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.751 [2024-12-06 20:57:40.622798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:23.751 [2024-12-06 20:57:40.622804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:23.751 [2024-12-06 20:57:40.622810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.751 [2024-12-06 20:57:40.622825] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:23.751 [2024-12-06 20:57:40.625472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.751 [2024-12-06 20:57:40.625573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:23.751 [2024-12-06 20:57:40.625585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.650 ms 00:30:23.751 [2024-12-06 20:57:40.625593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.751 [2024-12-06 20:57:40.625619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.751 [2024-12-06 20:57:40.625626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:23.751 [2024-12-06 20:57:40.625632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:23.751 [2024-12-06 20:57:40.625637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.751 [2024-12-06 20:57:40.625653] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:23.751 [2024-12-06 20:57:40.625669] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:23.751 [2024-12-06 20:57:40.625697] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:23.751 [2024-12-06 20:57:40.625708] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:23.751 [2024-12-06 20:57:40.625786] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:23.751 [2024-12-06 20:57:40.625794] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:23.751 [2024-12-06 20:57:40.625802] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:23.751 [2024-12-06 20:57:40.625810] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:23.751 [2024-12-06 20:57:40.625817] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:23.751 [2024-12-06 20:57:40.625825] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:23.751 [2024-12-06 20:57:40.625830] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:23.751 [2024-12-06 20:57:40.625836] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:23.751 [2024-12-06 20:57:40.625842] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:23.751 [2024-12-06 20:57:40.625848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.751 [2024-12-06 20:57:40.625853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:23.751 [2024-12-06 20:57:40.625858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.197 ms 00:30:23.751 [2024-12-06 20:57:40.625864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.751 [2024-12-06 20:57:40.625944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.751 [2024-12-06 20:57:40.625951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:23.751 [2024-12-06 20:57:40.625959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:23.751 [2024-12-06 20:57:40.625965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.751 [2024-12-06 20:57:40.626042] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:23.751 [2024-12-06 20:57:40.626049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:23.751 [2024-12-06 20:57:40.626055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:23.751 [2024-12-06 20:57:40.626061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.751 [2024-12-06 20:57:40.626067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:23.751 [2024-12-06 20:57:40.626072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:23.751 [2024-12-06 20:57:40.626077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:23.751 [2024-12-06 20:57:40.626082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:23.751 [2024-12-06 20:57:40.626088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:23.751 [2024-12-06 20:57:40.626093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.751 [2024-12-06 20:57:40.626098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:23.751 [2024-12-06 20:57:40.626104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:23.751 [2024-12-06 20:57:40.626109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.751 [2024-12-06 20:57:40.626115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:23.751 [2024-12-06 20:57:40.626124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:23.751 [2024-12-06 20:57:40.626129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.751 [2024-12-06 20:57:40.626134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:23.751 [2024-12-06 20:57:40.626139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:23.751 [2024-12-06 20:57:40.626144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.751 [2024-12-06 20:57:40.626149] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:23.751 [2024-12-06 20:57:40.626154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:23.751 [2024-12-06 20:57:40.626159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:23.751 [2024-12-06 20:57:40.626164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:23.751 [2024-12-06 20:57:40.626173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:23.751 [2024-12-06 20:57:40.626178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:23.751 [2024-12-06 20:57:40.626183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:23.751 [2024-12-06 20:57:40.626188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:23.751 [2024-12-06 20:57:40.626192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:23.752 [2024-12-06 20:57:40.626197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:23.752 [2024-12-06 20:57:40.626202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:23.752 [2024-12-06 20:57:40.626207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:23.752 [2024-12-06 20:57:40.626212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:23.752 [2024-12-06 20:57:40.626217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:23.752 [2024-12-06 20:57:40.626222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.752 [2024-12-06 20:57:40.626226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:23.752 [2024-12-06 20:57:40.626231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:23.752 [2024-12-06 20:57:40.626236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.752 [2024-12-06 20:57:40.626241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:23.752 [2024-12-06 20:57:40.626246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:23.752 [2024-12-06 20:57:40.626250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.752 [2024-12-06 20:57:40.626256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:23.752 [2024-12-06 20:57:40.626261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:23.752 [2024-12-06 20:57:40.626266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.752 [2024-12-06 20:57:40.626270] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:23.752 [2024-12-06 20:57:40.626276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:23.752 [2024-12-06 20:57:40.626282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:23.752 [2024-12-06 20:57:40.626288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.752 [2024-12-06 20:57:40.626295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:23.752 [2024-12-06 20:57:40.626300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:23.752 [2024-12-06 20:57:40.626305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:23.752 [2024-12-06 20:57:40.626311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:23.752 [2024-12-06 20:57:40.626315] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:23.752 [2024-12-06 20:57:40.626321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:23.752 [2024-12-06 20:57:40.626327] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:23.752 [2024-12-06 20:57:40.626334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:23.752 [2024-12-06 20:57:40.626340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:23.752 [2024-12-06 20:57:40.626345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:23.752 [2024-12-06 20:57:40.626351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:23.752 [2024-12-06 20:57:40.626356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:23.752 [2024-12-06 20:57:40.626361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:23.752 [2024-12-06 20:57:40.626367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:23.752 [2024-12-06 20:57:40.626372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:23.752 [2024-12-06 20:57:40.626378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:23.752 [2024-12-06 20:57:40.626383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:23.752 [2024-12-06 20:57:40.626388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:23.752 [2024-12-06 20:57:40.626394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:23.752 [2024-12-06 20:57:40.626399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:23.752 [2024-12-06 20:57:40.626404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:23.752 [2024-12-06 20:57:40.626409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:23.752 [2024-12-06 20:57:40.626414] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:23.752 [2024-12-06 20:57:40.626420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:23.752 [2024-12-06 20:57:40.626426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:23.752 [2024-12-06 20:57:40.626432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:23.752 [2024-12-06 20:57:40.626437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:23.752 [2024-12-06 20:57:40.626442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:23.752 [2024-12-06 20:57:40.626448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.752 [2024-12-06 20:57:40.626454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:23.752 [2024-12-06 20:57:40.626460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.459 ms 00:30:23.752 [2024-12-06 20:57:40.626465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.752 [2024-12-06 20:57:40.626499] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:23.752 [2024-12-06 20:57:40.626506] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:27.968 [2024-12-06 20:57:44.426001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.426084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:27.968 [2024-12-06 20:57:44.426103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3799.487 ms 00:30:27.968 [2024-12-06 20:57:44.426125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.458154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.458403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:27.968 [2024-12-06 20:57:44.458426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.776 ms 00:30:27.968 [2024-12-06 20:57:44.458436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.458541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.458561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:27.968 [2024-12-06 20:57:44.458572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:27.968 [2024-12-06 20:57:44.458580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.494053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.494103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:27.968 [2024-12-06 20:57:44.494119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.430 ms 00:30:27.968 [2024-12-06 20:57:44.494128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.494164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.494174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:27.968 [2024-12-06 20:57:44.494183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:27.968 [2024-12-06 20:57:44.494191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.494774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.494799] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:27.968 [2024-12-06 20:57:44.494810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 00:30:27.968 [2024-12-06 20:57:44.494819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.494875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.494886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:27.968 [2024-12-06 20:57:44.494930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:27.968 [2024-12-06 20:57:44.494938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.512488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.512536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:27.968 [2024-12-06 20:57:44.512547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.521 ms 00:30:27.968 [2024-12-06 20:57:44.512555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.541407] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:27.968 [2024-12-06 20:57:44.541664] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:27.968 [2024-12-06 20:57:44.541690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.541701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:27.968 [2024-12-06 20:57:44.541715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.991 ms 00:30:27.968 [2024-12-06 20:57:44.541724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.557294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.557348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:27.968 [2024-12-06 20:57:44.557361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.512 ms 00:30:27.968 [2024-12-06 20:57:44.557369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.569884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.569942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:27.968 [2024-12-06 20:57:44.569954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.456 ms 00:30:27.968 [2024-12-06 20:57:44.569961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.582544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.582593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:27.968 [2024-12-06 20:57:44.582605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.532 ms 00:30:27.968 [2024-12-06 20:57:44.582612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.583290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.583318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:27.968 [2024-12-06 
20:57:44.583329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.559 ms 00:30:27.968 [2024-12-06 20:57:44.583337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.648266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.648549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:27.968 [2024-12-06 20:57:44.648575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 64.906 ms 00:30:27.968 [2024-12-06 20:57:44.648585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.660004] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:27.968 [2024-12-06 20:57:44.661064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.661107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:27.968 [2024-12-06 20:57:44.661119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.395 ms 00:30:27.968 [2024-12-06 20:57:44.661128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.661238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.661252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:27.968 [2024-12-06 20:57:44.661263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:27.968 [2024-12-06 20:57:44.661271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.661333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.661344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:27.968 [2024-12-06 20:57:44.661353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:27.968 [2024-12-06 20:57:44.661361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.661385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.661394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:27.968 [2024-12-06 20:57:44.661405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:27.968 [2024-12-06 20:57:44.661414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.661452] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:27.968 [2024-12-06 20:57:44.661463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.661471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:27.968 [2024-12-06 20:57:44.661479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:27.968 [2024-12-06 20:57:44.661488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.686829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.686885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:27.968 [2024-12-06 20:57:44.686917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.320 ms 00:30:27.968 [2024-12-06 20:57:44.686926] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.687014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.687025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:27.968 [2024-12-06 20:57:44.687035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:27.968 [2024-12-06 20:57:44.687060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.688947] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4081.448 ms, result 0 00:30:27.968 [2024-12-06 20:57:44.703277] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.968 [2024-12-06 20:57:44.719289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:27.968 [2024-12-06 20:57:44.727469] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:27.968 20:57:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.968 20:57:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:27.968 20:57:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:27.968 20:57:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:27.968 20:57:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:27.968 [2024-12-06 20:57:44.967511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.967733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:27.968 [2024-12-06 20:57:44.967817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:27.968 [2024-12-06 20:57:44.967843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.967909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.967935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:27.968 [2024-12-06 20:57:44.967957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:27.968 [2024-12-06 20:57:44.967976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.968010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.968 [2024-12-06 20:57:44.968031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:27.968 [2024-12-06 20:57:44.968053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:27.968 [2024-12-06 20:57:44.968224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.968 [2024-12-06 20:57:44.968320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.795 ms, result 0 00:30:27.968 true 00:30:27.968 20:57:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:28.229 { 00:30:28.229 "name": "ftl", 00:30:28.229 "properties": [ 00:30:28.229 { 00:30:28.229 "name": "superblock_version", 00:30:28.229 "value": 5, 00:30:28.229 "read-only": true 00:30:28.229 }, 
00:30:28.229 { 00:30:28.229 "name": "base_device", 00:30:28.229 "bands": [ 00:30:28.229 { 00:30:28.229 "id": 0, 00:30:28.229 "state": "CLOSED", 00:30:28.229 "validity": 1.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 1, 00:30:28.229 "state": "CLOSED", 00:30:28.229 "validity": 1.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 2, 00:30:28.229 "state": "CLOSED", 00:30:28.229 "validity": 0.007843137254901933 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 3, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 4, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 5, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 6, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 7, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 8, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 9, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 10, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 11, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 12, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 13, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 14, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 15, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 16, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 17, 00:30:28.229 "state": "FREE", 00:30:28.229 "validity": 0.0 00:30:28.229 } 00:30:28.229 ], 00:30:28.229 "read-only": true 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "name": "cache_device", 00:30:28.229 "type": "bdev", 00:30:28.229 "chunks": [ 00:30:28.229 { 00:30:28.229 "id": 0, 00:30:28.229 "state": "INACTIVE", 00:30:28.229 "utilization": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 1, 00:30:28.229 "state": "OPEN", 00:30:28.229 "utilization": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 2, 00:30:28.229 "state": "OPEN", 00:30:28.229 "utilization": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 3, 00:30:28.229 "state": "FREE", 00:30:28.229 "utilization": 0.0 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "id": 4, 00:30:28.229 "state": "FREE", 00:30:28.229 "utilization": 0.0 00:30:28.229 } 00:30:28.229 ], 00:30:28.229 "read-only": true 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "name": "verbose_mode", 00:30:28.229 "value": true, 00:30:28.229 "unit": "", 00:30:28.229 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:28.229 }, 00:30:28.229 { 00:30:28.229 "name": "prep_upgrade_on_shutdown", 00:30:28.229 "value": false, 00:30:28.229 "unit": "", 00:30:28.229 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:28.229 } 00:30:28.229 ] 00:30:28.229 } 00:30:28.229 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:28.229 20:57:45 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:28.229 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:28.490 Validate MD5 checksum, iteration 1 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:28.490 20:57:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:28.750 [2024-12-06 20:57:45.646021] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
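The used=0 and opened=0 values in the xtrace above come from two jq filters applied to the bdev_ftl_get_properties JSON dumped a few steps earlier. The first filter, reproduced standalone below, counts NV-cache chunks whose utilization is non-zero; against the JSON above it yields 0 because every chunk reports utilization 0.0. The companion filter counts bands in state "OPENED" the same way, and the [[ 0 -ne 0 ]] guards suggest the script only falls through to checksum validation when both counts are zero.

    # Count NV-cache chunks that currently hold data (non-zero utilization)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[]
               | select(.name == "cache_device")
               | .chunks[]
               | select(.utilization != 0.0)] | length'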
00:30:28.750 [2024-12-06 20:57:45.646330] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83136 ] 00:30:28.750 [2024-12-06 20:57:45.810532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.011 [2024-12-06 20:57:45.929156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.458  [2024-12-06T20:57:48.534Z] Copying: 580/1024 [MB] (580 MBps) [2024-12-06T20:57:49.922Z] Copying: 1024/1024 [MB] (average 584 MBps) 00:30:32.789 00:30:32.789 20:57:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:32.789 20:57:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:34.698 20:57:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:34.698 20:57:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=48f1ee6275f745ec0dc81440f68e238f 00:30:34.698 20:57:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 48f1ee6275f745ec0dc81440f68e238f != \4\8\f\1\e\e\6\2\7\5\f\7\4\5\e\c\0\d\c\8\1\4\4\0\f\6\8\e\2\3\8\f ]] 00:30:34.698 Validate MD5 checksum, iteration 2 00:30:34.698 20:57:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:34.698 20:57:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:34.698 20:57:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:34.698 20:57:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:34.698 20:57:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:34.698 20:57:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:34.698 20:57:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:34.698 20:57:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:34.698 20:57:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:34.698 [2024-12-06 20:57:51.728338] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
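Each "Validate MD5 checksum" iteration above has the same shape: tcp_dd reads a 1 GiB window (--bs=1048576 --count=1024) from the exported ftln1 bdev at the current --skip offset into a scratch file, md5sum and cut extract the digest, and the [[ ... != ... ]] test at upgrade_shutdown.sh line 105 compares it against a previously recorded reference. A condensed sketch of one pass, with the reference array named expected[] purely for illustration (how the references get recorded happens outside this excerpt):

    # One checksum pass (sketch): read a window of ftln1 over NVMe/TCP, hash, compare
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
    [[ $sum != "${expected[i]}" ]] && return 1   # expected[] is an illustrative name
    skip=$((skip + 1024))                        # next iteration covers the next 1 GiB window

The two window hashes recorded here (48f1ee62... and 9beac6cc...) are presumably what the data gets re-checked against once the target comes back up after the dirty restart below.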
00:30:34.698 [2024-12-06 20:57:51.728448] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83204 ] 00:30:34.959 [2024-12-06 20:57:51.887252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.959 [2024-12-06 20:57:51.981263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.873  [2024-12-06T20:57:54.006Z] Copying: 692/1024 [MB] (692 MBps) [2024-12-06T20:57:58.212Z] Copying: 1024/1024 [MB] (average 677 MBps) 00:30:41.079 00:30:41.079 20:57:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:41.079 20:57:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9beac6cc3cdf509bdebffdd7f971a552 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9beac6cc3cdf509bdebffdd7f971a552 != \9\b\e\a\c\6\c\c\3\c\d\f\5\0\9\b\d\e\b\f\f\d\d\7\f\9\7\1\a\5\5\2 ]] 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83057 ]] 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83057 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:42.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83282 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83282 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83282 ']' 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
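The kill -9 in the xtrace above is the pivot of the test: tcp_target_shutdown_dirty SIGKILLs the target (pid 83057) so FTL never runs its shutdown path, and tcp_target_setup immediately relaunches spdk_tgt (pid 83282) from the same tgt.json, so the new instance starts from a state that was never cleanly shut down. A paraphrase of the two ftl/common.sh helpers as they appear in the xtrace (the spdk_tgt_bin variable and the else-branch of the config check are assumptions, not verbatim source):

    # Kill the target without giving FTL a chance to write a clean-shutdown marker
    tcp_target_shutdown_dirty() {
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }

    # Bring the target back up from the config written by the first startup
    tcp_target_setup() {
        local base_bdev= cache_bdev=
        [[ -f "$testdir/config/tgt.json" ]] || return 1   # assumption: bail without a config
        "$spdk_tgt_bin" '--cpumask=[0]' --config="$testdir/config/tgt.json" &
        spdk_tgt_pid=$!
        export spdk_tgt_pid
        waitforlisten "$spdk_tgt_pid"   # poll the RPC socket until the new target answers
    }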
00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.020 20:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:42.020 [2024-12-06 20:57:59.118991] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:30:42.020 [2024-12-06 20:57:59.119285] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83282 ] 00:30:42.281 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83057 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:42.281 [2024-12-06 20:57:59.275939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.281 [2024-12-06 20:57:59.353384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.852 [2024-12-06 20:57:59.925865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:42.852 [2024-12-06 20:57:59.926044] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:43.114 [2024-12-06 20:58:00.068982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.114 [2024-12-06 20:58:00.069116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:43.114 [2024-12-06 20:58:00.069168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:43.114 [2024-12-06 20:58:00.069187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.114 [2024-12-06 20:58:00.069247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.114 [2024-12-06 20:58:00.069323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:43.114 [2024-12-06 20:58:00.069342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:30:43.114 [2024-12-06 20:58:00.069357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.114 [2024-12-06 20:58:00.069411] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:43.114 [2024-12-06 20:58:00.069943] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:43.114 [2024-12-06 20:58:00.070018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.114 [2024-12-06 20:58:00.070054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:43.114 [2024-12-06 20:58:00.070072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.616 ms 00:30:43.114 [2024-12-06 20:58:00.070087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.114 [2024-12-06 20:58:00.070333] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:43.114 [2024-12-06 20:58:00.083000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.114 [2024-12-06 20:58:00.083105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:43.114 [2024-12-06 20:58:00.083151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.667 ms 
00:30:43.114 [2024-12-06 20:58:00.083160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.114 [2024-12-06 20:58:00.089901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.114 [2024-12-06 20:58:00.089931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:43.115 [2024-12-06 20:58:00.089939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:43.115 [2024-12-06 20:58:00.089945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.115 [2024-12-06 20:58:00.090186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.115 [2024-12-06 20:58:00.090195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:43.115 [2024-12-06 20:58:00.090202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:30:43.115 [2024-12-06 20:58:00.090208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.115 [2024-12-06 20:58:00.090247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.115 [2024-12-06 20:58:00.090254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:43.115 [2024-12-06 20:58:00.090261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:43.115 [2024-12-06 20:58:00.090267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.115 [2024-12-06 20:58:00.090287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.115 [2024-12-06 20:58:00.090293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:43.115 [2024-12-06 20:58:00.090300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:43.115 [2024-12-06 20:58:00.090306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.115 [2024-12-06 20:58:00.090321] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:43.115 [2024-12-06 20:58:00.092678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.115 [2024-12-06 20:58:00.092771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:43.115 [2024-12-06 20:58:00.092783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.361 ms 00:30:43.115 [2024-12-06 20:58:00.092789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.115 [2024-12-06 20:58:00.092814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.115 [2024-12-06 20:58:00.092821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:43.115 [2024-12-06 20:58:00.092827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:43.115 [2024-12-06 20:58:00.092834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.115 [2024-12-06 20:58:00.092850] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:43.115 [2024-12-06 20:58:00.092865] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:43.115 [2024-12-06 20:58:00.092906] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:43.115 [2024-12-06 20:58:00.092920] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:43.115 [2024-12-06 
20:58:00.093000] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:43.115 [2024-12-06 20:58:00.093008] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:43.115 [2024-12-06 20:58:00.093016] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:43.115 [2024-12-06 20:58:00.093024] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:43.115 [2024-12-06 20:58:00.093031] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:43.115 [2024-12-06 20:58:00.093038] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:43.115 [2024-12-06 20:58:00.093043] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:43.115 [2024-12-06 20:58:00.093049] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:43.115 [2024-12-06 20:58:00.093054] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:43.115 [2024-12-06 20:58:00.093062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.115 [2024-12-06 20:58:00.093067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:43.115 [2024-12-06 20:58:00.093073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.213 ms 00:30:43.115 [2024-12-06 20:58:00.093078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.115 [2024-12-06 20:58:00.093142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.115 [2024-12-06 20:58:00.093149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:43.115 [2024-12-06 20:58:00.093155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:43.115 [2024-12-06 20:58:00.093160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.115 [2024-12-06 20:58:00.093234] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:43.115 [2024-12-06 20:58:00.093244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:43.115 [2024-12-06 20:58:00.093250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:43.115 [2024-12-06 20:58:00.093256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.115 [2024-12-06 20:58:00.093262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:43.115 [2024-12-06 20:58:00.093268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:43.115 [2024-12-06 20:58:00.093273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:43.115 [2024-12-06 20:58:00.093278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:43.115 [2024-12-06 20:58:00.093285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:43.115 [2024-12-06 20:58:00.093290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.115 [2024-12-06 20:58:00.093295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:43.115 [2024-12-06 20:58:00.093300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:43.115 [2024-12-06 20:58:00.093306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.115 [2024-12-06 
20:58:00.093312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:43.115 [2024-12-06 20:58:00.093317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:43.115 [2024-12-06 20:58:00.093322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.115 [2024-12-06 20:58:00.093326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:43.115 [2024-12-06 20:58:00.093331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:43.115 [2024-12-06 20:58:00.093336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.115 [2024-12-06 20:58:00.093341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:43.115 [2024-12-06 20:58:00.093346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:43.115 [2024-12-06 20:58:00.093355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:43.115 [2024-12-06 20:58:00.093361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:43.115 [2024-12-06 20:58:00.093365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:43.115 [2024-12-06 20:58:00.093370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:43.115 [2024-12-06 20:58:00.093375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:43.115 [2024-12-06 20:58:00.093380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:43.115 [2024-12-06 20:58:00.093385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:43.115 [2024-12-06 20:58:00.093390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:43.115 [2024-12-06 20:58:00.093395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:43.115 [2024-12-06 20:58:00.093399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:43.115 [2024-12-06 20:58:00.093404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:43.115 [2024-12-06 20:58:00.093409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:43.115 [2024-12-06 20:58:00.093414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.115 [2024-12-06 20:58:00.093419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:43.115 [2024-12-06 20:58:00.093424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:43.115 [2024-12-06 20:58:00.093429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.115 [2024-12-06 20:58:00.093433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:43.115 [2024-12-06 20:58:00.093438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:43.115 [2024-12-06 20:58:00.093443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.115 [2024-12-06 20:58:00.093449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:43.115 [2024-12-06 20:58:00.093453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:43.115 [2024-12-06 20:58:00.093458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.115 [2024-12-06 20:58:00.093463] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:43.115 [2024-12-06 20:58:00.093470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:43.115 
[2024-12-06 20:58:00.093476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:43.115 [2024-12-06 20:58:00.093481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.115 [2024-12-06 20:58:00.093487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:43.115 [2024-12-06 20:58:00.093492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:43.115 [2024-12-06 20:58:00.093498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:43.115 [2024-12-06 20:58:00.093503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:43.115 [2024-12-06 20:58:00.093508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:43.115 [2024-12-06 20:58:00.093513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:43.115 [2024-12-06 20:58:00.093519] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:43.115 [2024-12-06 20:58:00.093526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:43.115 [2024-12-06 20:58:00.093532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:43.115 [2024-12-06 20:58:00.093538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:43.116 [2024-12-06 20:58:00.093543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:43.116 [2024-12-06 20:58:00.093548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:43.116 [2024-12-06 20:58:00.093553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:43.116 [2024-12-06 20:58:00.093559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:43.116 [2024-12-06 20:58:00.093564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:43.116 [2024-12-06 20:58:00.093570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:43.116 [2024-12-06 20:58:00.093575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:43.116 [2024-12-06 20:58:00.093580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:43.116 [2024-12-06 20:58:00.093585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:43.116 [2024-12-06 20:58:00.093591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:43.116 [2024-12-06 20:58:00.093596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:43.116 [2024-12-06 20:58:00.093601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:43.116 [2024-12-06 20:58:00.093607] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:43.116 [2024-12-06 20:58:00.093612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:43.116 [2024-12-06 20:58:00.093620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:43.116 [2024-12-06 20:58:00.093626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:43.116 [2024-12-06 20:58:00.093631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:43.116 [2024-12-06 20:58:00.093637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:43.116 [2024-12-06 20:58:00.093642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.093652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:43.116 [2024-12-06 20:58:00.093657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.461 ms 00:30:43.116 [2024-12-06 20:58:00.093663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.112687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.112716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:43.116 [2024-12-06 20:58:00.112723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.988 ms 00:30:43.116 [2024-12-06 20:58:00.112730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.112759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.112765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:43.116 [2024-12-06 20:58:00.112771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:43.116 [2024-12-06 20:58:00.112777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.137133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.137160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:43.116 [2024-12-06 20:58:00.137168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.314 ms 00:30:43.116 [2024-12-06 20:58:00.137174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.137195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.137201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:43.116 [2024-12-06 20:58:00.137208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:43.116 [2024-12-06 20:58:00.137216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.137291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.137299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:30:43.116 [2024-12-06 20:58:00.137306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:43.116 [2024-12-06 20:58:00.137312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.137341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.137348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:43.116 [2024-12-06 20:58:00.137354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:43.116 [2024-12-06 20:58:00.137359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.148765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.148793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:43.116 [2024-12-06 20:58:00.148800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.384 ms 00:30:43.116 [2024-12-06 20:58:00.148806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.148878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.148902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:43.116 [2024-12-06 20:58:00.148909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:43.116 [2024-12-06 20:58:00.148915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.178161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.178194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:43.116 [2024-12-06 20:58:00.178205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.230 ms 00:30:43.116 [2024-12-06 20:58:00.178211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.185311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.185338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:43.116 [2024-12-06 20:58:00.185352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.388 ms 00:30:43.116 [2024-12-06 20:58:00.185358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.228332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.228376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:43.116 [2024-12-06 20:58:00.228386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.931 ms 00:30:43.116 [2024-12-06 20:58:00.228392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.228496] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:43.116 [2024-12-06 20:58:00.228572] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:43.116 [2024-12-06 20:58:00.228644] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:43.116 [2024-12-06 20:58:00.228716] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:43.116 [2024-12-06 20:58:00.228723] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.228730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:43.116 [2024-12-06 20:58:00.228737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.297 ms 00:30:43.116 [2024-12-06 20:58:00.228743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.228783] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:43.116 [2024-12-06 20:58:00.228793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.228801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:43.116 [2024-12-06 20:58:00.228808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:43.116 [2024-12-06 20:58:00.228814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.116 [2024-12-06 20:58:00.239909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.116 [2024-12-06 20:58:00.239941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:43.116 [2024-12-06 20:58:00.239950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.078 ms 00:30:43.116 [2024-12-06 20:58:00.239956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.378 [2024-12-06 20:58:00.246348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.378 [2024-12-06 20:58:00.246385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:43.378 [2024-12-06 20:58:00.246392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:43.378 [2024-12-06 20:58:00.246398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.378 [2024-12-06 20:58:00.246463] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:43.378 [2024-12-06 20:58:00.246573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.378 [2024-12-06 20:58:00.246581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:43.378 [2024-12-06 20:58:00.246588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.111 ms 00:30:43.378 [2024-12-06 20:58:00.246593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.950 [2024-12-06 20:58:00.776911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.950 [2024-12-06 20:58:00.776985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:43.950 [2024-12-06 20:58:00.777001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 529.681 ms 00:30:43.950 [2024-12-06 20:58:00.777010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.950 [2024-12-06 20:58:00.780937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.950 [2024-12-06 20:58:00.780978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:43.950 [2024-12-06 20:58:00.780990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.927 ms 00:30:43.950 [2024-12-06 20:58:00.780998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.950 [2024-12-06 20:58:00.781524] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:30:43.950 [2024-12-06 20:58:00.781556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.950 [2024-12-06 20:58:00.781565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:43.950 [2024-12-06 20:58:00.781575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.523 ms 00:30:43.950 [2024-12-06 20:58:00.781583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.950 [2024-12-06 20:58:00.781614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.950 [2024-12-06 20:58:00.781625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:43.950 [2024-12-06 20:58:00.781634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:43.950 [2024-12-06 20:58:00.781647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.950 [2024-12-06 20:58:00.781682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 535.215 ms, result 0 00:30:43.950 [2024-12-06 20:58:00.781719] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:43.950 [2024-12-06 20:58:00.781803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.950 [2024-12-06 20:58:00.781813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:43.950 [2024-12-06 20:58:00.781822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:30:43.950 [2024-12-06 20:58:00.781829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.523 [2024-12-06 20:58:01.418684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.523 [2024-12-06 20:58:01.418758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:44.523 [2024-12-06 20:58:01.418785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 635.875 ms 00:30:44.523 [2024-12-06 20:58:01.418793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.523 [2024-12-06 20:58:01.422465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.523 [2024-12-06 20:58:01.422511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:44.523 [2024-12-06 20:58:01.422520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.896 ms 00:30:44.523 [2024-12-06 20:58:01.422528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.523 [2024-12-06 20:58:01.422877] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:44.523 [2024-12-06 20:58:01.422923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.523 [2024-12-06 20:58:01.422930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:44.523 [2024-12-06 20:58:01.422939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.362 ms 00:30:44.524 [2024-12-06 20:58:01.422947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.524 [2024-12-06 20:58:01.422978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.524 [2024-12-06 20:58:01.422986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:44.524 [2024-12-06 20:58:01.422994] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:44.524 [2024-12-06 20:58:01.423000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.524 [2024-12-06 20:58:01.423032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 641.307 ms, result 0 00:30:44.524 [2024-12-06 20:58:01.423072] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:44.524 [2024-12-06 20:58:01.423081] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:44.524 [2024-12-06 20:58:01.423090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.524 [2024-12-06 20:58:01.423097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:44.524 [2024-12-06 20:58:01.423104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1176.643 ms 00:30:44.524 [2024-12-06 20:58:01.423111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.524 [2024-12-06 20:58:01.423137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.524 [2024-12-06 20:58:01.423148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:44.524 [2024-12-06 20:58:01.423156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:44.524 [2024-12-06 20:58:01.423163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.524 [2024-12-06 20:58:01.433299] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:44.524 [2024-12-06 20:58:01.433402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.524 [2024-12-06 20:58:01.433412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:44.524 [2024-12-06 20:58:01.433421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.226 ms 00:30:44.524 [2024-12-06 20:58:01.433428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.524 [2024-12-06 20:58:01.434000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.524 [2024-12-06 20:58:01.434020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:44.524 [2024-12-06 20:58:01.434031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.512 ms 00:30:44.524 [2024-12-06 20:58:01.434037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.524 [2024-12-06 20:58:01.435719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.524 [2024-12-06 20:58:01.435740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:44.524 [2024-12-06 20:58:01.435749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.667 ms 00:30:44.524 [2024-12-06 20:58:01.435755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.524 [2024-12-06 20:58:01.435789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.524 [2024-12-06 20:58:01.435797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:44.524 [2024-12-06 20:58:01.435804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:44.524 [2024-12-06 20:58:01.435813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.524 [2024-12-06 20:58:01.435907] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.524 [2024-12-06 20:58:01.435916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:44.524 [2024-12-06 20:58:01.435924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:44.524 [2024-12-06 20:58:01.435930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.524 [2024-12-06 20:58:01.435949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.524 [2024-12-06 20:58:01.435957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:44.524 [2024-12-06 20:58:01.435965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:44.524 [2024-12-06 20:58:01.435971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.524 [2024-12-06 20:58:01.436000] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:44.524 [2024-12-06 20:58:01.436008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.524 [2024-12-06 20:58:01.436014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:44.524 [2024-12-06 20:58:01.436021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:44.524 [2024-12-06 20:58:01.436027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.524 [2024-12-06 20:58:01.436079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.524 [2024-12-06 20:58:01.436087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:44.524 [2024-12-06 20:58:01.436094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:30:44.524 [2024-12-06 20:58:01.436099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.524 [2024-12-06 20:58:01.437088] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1367.690 ms, result 0 00:30:44.524 [2024-12-06 20:58:01.449733] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.524 [2024-12-06 20:58:01.465720] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:44.524 [2024-12-06 20:58:01.473847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:44.787 Validate MD5 checksum, iteration 1 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:44.787 20:58:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:44.787 [2024-12-06 20:58:01.729076] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:30:44.787 [2024-12-06 20:58:01.729318] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83317 ] 00:30:44.787 [2024-12-06 20:58:01.888708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.048 [2024-12-06 20:58:01.980631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.504  [2024-12-06T20:58:04.211Z] Copying: 649/1024 [MB] (649 MBps) [2024-12-06T20:58:05.594Z] Copying: 1024/1024 [MB] (average 650 MBps) 00:30:48.461 00:30:48.461 20:58:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:48.461 20:58:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:50.374 20:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:50.374 Validate MD5 checksum, iteration 2 00:30:50.374 20:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=48f1ee6275f745ec0dc81440f68e238f 00:30:50.374 20:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 48f1ee6275f745ec0dc81440f68e238f != \4\8\f\1\e\e\6\2\7\5\f\7\4\5\e\c\0\d\c\8\1\4\4\0\f\6\8\e\2\3\8\f ]] 00:30:50.374 20:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:50.374 20:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:50.374 20:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:50.374 20:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:50.374 20:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:50.374 20:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:50.374 20:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:50.374 20:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:50.374 20:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:50.374 [2024-12-06 20:58:07.456866] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:30:50.374 [2024-12-06 20:58:07.456989] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83378 ] 00:30:50.635 [2024-12-06 20:58:07.614857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.635 [2024-12-06 20:58:07.706981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.549  [2024-12-06T20:58:09.943Z] Copying: 678/1024 [MB] (678 MBps) [2024-12-06T20:58:10.514Z] Copying: 1024/1024 [MB] (average 673 MBps) 00:30:53.381 00:30:53.381 20:58:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:53.381 20:58:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9beac6cc3cdf509bdebffdd7f971a552 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9beac6cc3cdf509bdebffdd7f971a552 != \9\b\e\a\c\6\c\c\3\c\d\f\5\0\9\b\d\e\b\f\f\d\d\7\f\9\7\1\a\5\5\2 ]] 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83282 ]] 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83282 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83282 ']' 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83282 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83282 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83282' 00:30:55.922 killing process with pid 83282 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83282 00:30:55.922 20:58:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83282 00:30:56.183 [2024-12-06 20:58:13.216772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:56.183 [2024-12-06 20:58:13.228174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.183 [2024-12-06 20:58:13.228207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:56.183 [2024-12-06 20:58:13.228218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:56.183 [2024-12-06 20:58:13.228224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.183 [2024-12-06 20:58:13.228241] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:56.183 [2024-12-06 20:58:13.230351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.183 [2024-12-06 20:58:13.230378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:56.183 [2024-12-06 20:58:13.230386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.100 ms 00:30:56.183 [2024-12-06 20:58:13.230392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.183 [2024-12-06 20:58:13.230567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.183 [2024-12-06 20:58:13.230575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:56.183 [2024-12-06 20:58:13.230582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.159 ms 00:30:56.183 [2024-12-06 20:58:13.230587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.183 [2024-12-06 20:58:13.231661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.183 [2024-12-06 20:58:13.231758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:56.183 [2024-12-06 20:58:13.231770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.063 ms 00:30:56.183 [2024-12-06 20:58:13.231779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.183 [2024-12-06 20:58:13.232670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.183 [2024-12-06 20:58:13.232686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:56.183 [2024-12-06 20:58:13.232692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.866 ms 00:30:56.183 [2024-12-06 20:58:13.232698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.183 [2024-12-06 20:58:13.240243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.183 [2024-12-06 20:58:13.240270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:56.183 [2024-12-06 20:58:13.240281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.521 ms 00:30:56.183 [2024-12-06 20:58:13.240288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.183 [2024-12-06 20:58:13.244263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.183 [2024-12-06 20:58:13.244295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Persist valid map metadata 00:30:56.183 [2024-12-06 20:58:13.244304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.948 ms 00:30:56.183 [2024-12-06 20:58:13.244311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.183 [2024-12-06 20:58:13.244367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.183 [2024-12-06 20:58:13.244374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:56.183 [2024-12-06 20:58:13.244381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:56.183 [2024-12-06 20:58:13.244390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.183 [2024-12-06 20:58:13.251304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.183 [2024-12-06 20:58:13.251329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:56.183 [2024-12-06 20:58:13.251336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.902 ms 00:30:56.183 [2024-12-06 20:58:13.251341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.183 [2024-12-06 20:58:13.258455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.183 [2024-12-06 20:58:13.258548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:56.183 [2024-12-06 20:58:13.258560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.090 ms 00:30:56.183 [2024-12-06 20:58:13.258566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.183 [2024-12-06 20:58:13.265679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.184 [2024-12-06 20:58:13.265704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:56.184 [2024-12-06 20:58:13.265711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.091 ms 00:30:56.184 [2024-12-06 20:58:13.265716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.184 [2024-12-06 20:58:13.272701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.184 [2024-12-06 20:58:13.272780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:56.184 [2024-12-06 20:58:13.272791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.940 ms 00:30:56.184 [2024-12-06 20:58:13.272796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.184 [2024-12-06 20:58:13.272818] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:56.184 [2024-12-06 20:58:13.272828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:56.184 [2024-12-06 20:58:13.272835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:56.184 [2024-12-06 20:58:13.272842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:56.184 [2024-12-06 20:58:13.272848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272865] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:56.184 [2024-12-06 20:58:13.272949] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:56.184 [2024-12-06 20:58:13.272955] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 46369d51-32a1-4457-b2d5-a2bb657f137b 00:30:56.184 [2024-12-06 20:58:13.272961] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:56.184 [2024-12-06 20:58:13.272967] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:56.184 [2024-12-06 20:58:13.272972] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:56.184 [2024-12-06 20:58:13.272978] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:56.184 [2024-12-06 20:58:13.272983] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:56.184 [2024-12-06 20:58:13.272988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:56.184 [2024-12-06 20:58:13.272997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:56.184 [2024-12-06 20:58:13.273002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:56.184 [2024-12-06 20:58:13.273007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:56.184 [2024-12-06 20:58:13.273013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.184 [2024-12-06 20:58:13.273020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:56.184 [2024-12-06 20:58:13.273027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:30:56.184 [2024-12-06 20:58:13.273033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.184 [2024-12-06 20:58:13.282631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.184 [2024-12-06 20:58:13.282655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 
00:30:56.184 [2024-12-06 20:58:13.282663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.585 ms 00:30:56.184 [2024-12-06 20:58:13.282669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.184 [2024-12-06 20:58:13.282952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.184 [2024-12-06 20:58:13.282963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:56.184 [2024-12-06 20:58:13.282970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.265 ms 00:30:56.184 [2024-12-06 20:58:13.282976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.315406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:56.446 [2024-12-06 20:58:13.315509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:56.446 [2024-12-06 20:58:13.315522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:56.446 [2024-12-06 20:58:13.315532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.315555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:56.446 [2024-12-06 20:58:13.315562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:56.446 [2024-12-06 20:58:13.315568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:56.446 [2024-12-06 20:58:13.315573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.315622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:56.446 [2024-12-06 20:58:13.315629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:56.446 [2024-12-06 20:58:13.315636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:56.446 [2024-12-06 20:58:13.315641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.315656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:56.446 [2024-12-06 20:58:13.315662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:56.446 [2024-12-06 20:58:13.315668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:56.446 [2024-12-06 20:58:13.315674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.373585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:56.446 [2024-12-06 20:58:13.373615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:56.446 [2024-12-06 20:58:13.373624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:56.446 [2024-12-06 20:58:13.373631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.421408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:56.446 [2024-12-06 20:58:13.421522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:56.446 [2024-12-06 20:58:13.421534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:56.446 [2024-12-06 20:58:13.421541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.421591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:56.446 [2024-12-06 20:58:13.421599] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:56.446 [2024-12-06 20:58:13.421605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:56.446 [2024-12-06 20:58:13.421611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.421652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:56.446 [2024-12-06 20:58:13.421669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:56.446 [2024-12-06 20:58:13.421676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:56.446 [2024-12-06 20:58:13.421682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.421754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:56.446 [2024-12-06 20:58:13.421761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:56.446 [2024-12-06 20:58:13.421767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:56.446 [2024-12-06 20:58:13.421772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.421795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:56.446 [2024-12-06 20:58:13.421802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:56.446 [2024-12-06 20:58:13.421811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:56.446 [2024-12-06 20:58:13.421816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.421844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:56.446 [2024-12-06 20:58:13.421851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:56.446 [2024-12-06 20:58:13.421857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:56.446 [2024-12-06 20:58:13.421862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.421915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:56.446 [2024-12-06 20:58:13.421925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:56.446 [2024-12-06 20:58:13.421931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:56.446 [2024-12-06 20:58:13.421937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.446 [2024-12-06 20:58:13.422024] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 193.829 ms, result 0 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:57.016 Remove shared memory files 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory 
files 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83057 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:57.016 ************************************ 00:30:57.016 END TEST ftl_upgrade_shutdown 00:30:57.016 ************************************ 00:30:57.016 00:30:57.016 real 1m18.895s 00:30:57.016 user 1m50.279s 00:30:57.016 sys 0m17.906s 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.016 20:58:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:57.016 20:58:14 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:57.016 20:58:14 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:57.016 20:58:14 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:57.016 20:58:14 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.016 20:58:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:57.016 ************************************ 00:30:57.016 START TEST ftl_restore_fast 00:30:57.016 ************************************ 00:30:57.016 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:57.275 * Looking for test storage... 00:30:57.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:57.275 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:57.275 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:30:57.275 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:57.275 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:57.275 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:57.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.276 --rc genhtml_branch_coverage=1 00:30:57.276 --rc genhtml_function_coverage=1 00:30:57.276 --rc genhtml_legend=1 00:30:57.276 --rc geninfo_all_blocks=1 00:30:57.276 --rc geninfo_unexecuted_blocks=1 00:30:57.276 00:30:57.276 ' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:57.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.276 --rc genhtml_branch_coverage=1 00:30:57.276 --rc genhtml_function_coverage=1 00:30:57.276 --rc genhtml_legend=1 00:30:57.276 --rc geninfo_all_blocks=1 00:30:57.276 --rc geninfo_unexecuted_blocks=1 00:30:57.276 00:30:57.276 ' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:57.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.276 --rc genhtml_branch_coverage=1 00:30:57.276 --rc genhtml_function_coverage=1 00:30:57.276 --rc genhtml_legend=1 00:30:57.276 --rc geninfo_all_blocks=1 00:30:57.276 --rc geninfo_unexecuted_blocks=1 00:30:57.276 00:30:57.276 ' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:57.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.276 --rc genhtml_branch_coverage=1 00:30:57.276 --rc genhtml_function_coverage=1 00:30:57.276 --rc genhtml_legend=1 00:30:57.276 --rc geninfo_all_blocks=1 00:30:57.276 --rc geninfo_unexecuted_blocks=1 00:30:57.276 00:30:57.276 ' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
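A note on the version gate in the cmp_versions xtrace above: scripts/common.sh resolves 'lt 1.15 2' through cmp_versions, which splits both version strings on '.', '-' and ':' and compares the pieces numerically, field by field; an lcov older than 2 selects the legacy '--rc lcov_*' option spelling exported just after. A condensed, runnable sketch of that comparison (not the verbatim common.sh source; the decimal() normalization step traced above is folded into the arithmetic here):

cmp_versions() {                      # usage: cmp_versions 1.15 '<' 2
    local IFS=.-:                     # split version fields on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing fields count as 0, so 1.15 vs 2 compares (1,15) against (2,0).
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    return 1                          # equal versions: a strict '<' or '>' fails
}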
00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:57.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
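Sourcing ftl/common.sh, as traced above, resolves the repo root from the test script's location and pins the RPC helper, target binary, core masks, and config paths through paired export/assign lines. A condensed sketch of that setup, with the paths as they appear in the trace:

    testdir=$(readlink -f "$(dirname "$0")")    # .../spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")     # .../spdk
    rpc_py=$rootdir/scripts/rpc.py
    export ftl_tgt_core_mask='[0]'
    export spdk_tgt_bin=$rootdir/build/bin/spdk_tgt
    export spdk_tgt_cpumask='[0]'
    export spdk_tgt_cnfg=$testdir/config/tgt.json
    export spdk_ini_cpumask='[1]'
    export spdk_ini_rpc=/var/tmp/spdk.tgt.sock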
00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.NjgOzz93l0 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=83523 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 83523 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 83523 ']' 00:30:57.276 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.277 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.277 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.277 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.277 20:58:14 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:57.277 20:58:14 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:57.277 [2024-12-06 20:58:14.355237] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
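restore.sh was invoked as `restore.sh -f -c 0000:00:10.0 0000:00:11.0`, and the getopts loop traced above maps that to fast_shutdown=1, nv_cache=0000:00:10.0, and the base device. A sketch of that option handling, consistent with the optstring `:u:c:f` in the trace (the -u branch is not exercised in this run, so its variable name is a guess):

    fast_shutdown=0 nv_cache='' uuid=''
    while getopts ':u:c:f' opt; do
        case $opt in
            f) fast_shutdown=1 ;;     # -f: take the fast-shutdown path
            c) nv_cache=$OPTARG ;;    # -c: NV-cache PCIe address
            u) uuid=$OPTARG ;;        # assumed: restore an existing FTL UUID
        esac
    done
    shift $(( OPTIND - 1 ))           # the trace's "shift 3"
    device=$1                         # 0000:00:11.0
    timeout=240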
00:30:57.277 [2024-12-06 20:58:14.355354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83523 ] 00:30:57.537 [2024-12-06 20:58:14.510719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.537 [2024-12-06 20:58:14.585628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.109 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:58.109 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:58.109 20:58:15 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:58.109 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:58.109 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:58.109 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:58.109 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:58.109 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:58.369 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:58.369 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:58.369 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:58.369 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:58.369 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:58.369 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:58.369 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:58.369 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:58.630 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:58.630 { 00:30:58.630 "name": "nvme0n1", 00:30:58.630 "aliases": [ 00:30:58.630 "d78d4b07-2b79-4cb3-87c1-92dcad84497c" 00:30:58.630 ], 00:30:58.630 "product_name": "NVMe disk", 00:30:58.630 "block_size": 4096, 00:30:58.630 "num_blocks": 1310720, 00:30:58.630 "uuid": "d78d4b07-2b79-4cb3-87c1-92dcad84497c", 00:30:58.630 "numa_id": -1, 00:30:58.630 "assigned_rate_limits": { 00:30:58.630 "rw_ios_per_sec": 0, 00:30:58.630 "rw_mbytes_per_sec": 0, 00:30:58.630 "r_mbytes_per_sec": 0, 00:30:58.630 "w_mbytes_per_sec": 0 00:30:58.630 }, 00:30:58.630 "claimed": true, 00:30:58.630 "claim_type": "read_many_write_one", 00:30:58.630 "zoned": false, 00:30:58.630 "supported_io_types": { 00:30:58.630 "read": true, 00:30:58.630 "write": true, 00:30:58.630 "unmap": true, 00:30:58.630 "flush": true, 00:30:58.630 "reset": true, 00:30:58.630 "nvme_admin": true, 00:30:58.630 "nvme_io": true, 00:30:58.630 "nvme_io_md": false, 00:30:58.630 "write_zeroes": true, 00:30:58.630 "zcopy": false, 00:30:58.630 "get_zone_info": false, 00:30:58.630 "zone_management": false, 00:30:58.630 "zone_append": false, 00:30:58.630 "compare": true, 00:30:58.630 "compare_and_write": false, 00:30:58.630 "abort": true, 00:30:58.630 "seek_hole": false, 00:30:58.630 "seek_data": false, 00:30:58.630 "copy": true, 00:30:58.630 "nvme_iov_md": 
false 00:30:58.630 }, 00:30:58.630 "driver_specific": { 00:30:58.630 "nvme": [ 00:30:58.630 { 00:30:58.630 "pci_address": "0000:00:11.0", 00:30:58.630 "trid": { 00:30:58.630 "trtype": "PCIe", 00:30:58.630 "traddr": "0000:00:11.0" 00:30:58.630 }, 00:30:58.630 "ctrlr_data": { 00:30:58.630 "cntlid": 0, 00:30:58.630 "vendor_id": "0x1b36", 00:30:58.630 "model_number": "QEMU NVMe Ctrl", 00:30:58.630 "serial_number": "12341", 00:30:58.630 "firmware_revision": "8.0.0", 00:30:58.630 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:58.630 "oacs": { 00:30:58.630 "security": 0, 00:30:58.630 "format": 1, 00:30:58.630 "firmware": 0, 00:30:58.630 "ns_manage": 1 00:30:58.630 }, 00:30:58.630 "multi_ctrlr": false, 00:30:58.630 "ana_reporting": false 00:30:58.630 }, 00:30:58.630 "vs": { 00:30:58.630 "nvme_version": "1.4" 00:30:58.630 }, 00:30:58.631 "ns_data": { 00:30:58.631 "id": 1, 00:30:58.631 "can_share": false 00:30:58.631 } 00:30:58.631 } 00:30:58.631 ], 00:30:58.631 "mp_policy": "active_passive" 00:30:58.631 } 00:30:58.631 } 00:30:58.631 ]' 00:30:58.631 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:58.631 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:58.631 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:58.631 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:58.631 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:58.631 20:58:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:58.631 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:58.631 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:58.631 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:58.631 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:58.631 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:58.891 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=c47c1300-75f9-488b-8e4d-69a466c7cca7 00:30:58.891 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:58.891 20:58:15 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c47c1300-75f9-488b-8e4d-69a466c7cca7 00:30:59.152 20:58:16 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=2d1d5b3a-37a2-4d7d-b6d8-6cea55fb50f2 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2d1d5b3a-37a2-4d7d-b6d8-6cea55fb50f2 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=fddc04e2-5807-4891-be5b-142d9c2402da 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fddc04e2-5807-4891-be5b-142d9c2402da 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local 
base_bdev=fddc04e2-5807-4891-be5b-142d9c2402da 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size fddc04e2-5807-4891-be5b-142d9c2402da 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=fddc04e2-5807-4891-be5b-142d9c2402da 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:59.413 20:58:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fddc04e2-5807-4891-be5b-142d9c2402da 00:30:59.672 20:58:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:59.672 { 00:30:59.672 "name": "fddc04e2-5807-4891-be5b-142d9c2402da", 00:30:59.672 "aliases": [ 00:30:59.672 "lvs/nvme0n1p0" 00:30:59.672 ], 00:30:59.672 "product_name": "Logical Volume", 00:30:59.672 "block_size": 4096, 00:30:59.672 "num_blocks": 26476544, 00:30:59.672 "uuid": "fddc04e2-5807-4891-be5b-142d9c2402da", 00:30:59.672 "assigned_rate_limits": { 00:30:59.672 "rw_ios_per_sec": 0, 00:30:59.672 "rw_mbytes_per_sec": 0, 00:30:59.672 "r_mbytes_per_sec": 0, 00:30:59.672 "w_mbytes_per_sec": 0 00:30:59.672 }, 00:30:59.672 "claimed": false, 00:30:59.672 "zoned": false, 00:30:59.672 "supported_io_types": { 00:30:59.672 "read": true, 00:30:59.672 "write": true, 00:30:59.672 "unmap": true, 00:30:59.672 "flush": false, 00:30:59.672 "reset": true, 00:30:59.672 "nvme_admin": false, 00:30:59.672 "nvme_io": false, 00:30:59.672 "nvme_io_md": false, 00:30:59.672 "write_zeroes": true, 00:30:59.672 "zcopy": false, 00:30:59.672 "get_zone_info": false, 00:30:59.672 "zone_management": false, 00:30:59.672 "zone_append": false, 00:30:59.672 "compare": false, 00:30:59.672 "compare_and_write": false, 00:30:59.672 "abort": false, 00:30:59.672 "seek_hole": true, 00:30:59.672 "seek_data": true, 00:30:59.672 "copy": false, 00:30:59.672 "nvme_iov_md": false 00:30:59.672 }, 00:30:59.672 "driver_specific": { 00:30:59.672 "lvol": { 00:30:59.672 "lvol_store_uuid": "2d1d5b3a-37a2-4d7d-b6d8-6cea55fb50f2", 00:30:59.672 "base_bdev": "nvme0n1", 00:30:59.672 "thin_provision": true, 00:30:59.672 "num_allocated_clusters": 0, 00:30:59.672 "snapshot": false, 00:30:59.672 "clone": false, 00:30:59.672 "esnap_clone": false 00:30:59.672 } 00:30:59.672 } 00:30:59.672 } 00:30:59.672 ]' 00:30:59.672 20:58:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:59.672 20:58:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:59.672 20:58:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:59.672 20:58:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:59.672 20:58:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:59.672 20:58:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:59.672 20:58:16 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:59.672 20:58:16 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:59.673 20:58:16 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
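get_bdev_size, exercised twice above, derives a bdev's size in MiB from the JSON that bdev_get_bdevs returns: block_size times num_blocks, scaled down. A sketch matching the traced jq queries (4096 x 1310720 gives 5120 MiB for nvme0n1; 4096 x 26476544 gives 103424 MiB for the lvol):

    get_bdev_size() {    # prints the named bdev's size in MiB
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")    # e.g. 4096
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # e.g. 26476544
        echo $(( bs * nb / 1024 / 1024 ))              # -> 103424
    }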
00:30:59.972 20:58:17 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:59.972 20:58:17 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:59.972 20:58:17 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size fddc04e2-5807-4891-be5b-142d9c2402da 00:30:59.972 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=fddc04e2-5807-4891-be5b-142d9c2402da 00:30:59.972 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:59.972 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:59.972 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:59.972 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fddc04e2-5807-4891-be5b-142d9c2402da 00:31:00.266 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:00.266 { 00:31:00.266 "name": "fddc04e2-5807-4891-be5b-142d9c2402da", 00:31:00.266 "aliases": [ 00:31:00.266 "lvs/nvme0n1p0" 00:31:00.266 ], 00:31:00.266 "product_name": "Logical Volume", 00:31:00.266 "block_size": 4096, 00:31:00.266 "num_blocks": 26476544, 00:31:00.266 "uuid": "fddc04e2-5807-4891-be5b-142d9c2402da", 00:31:00.266 "assigned_rate_limits": { 00:31:00.266 "rw_ios_per_sec": 0, 00:31:00.266 "rw_mbytes_per_sec": 0, 00:31:00.266 "r_mbytes_per_sec": 0, 00:31:00.266 "w_mbytes_per_sec": 0 00:31:00.266 }, 00:31:00.266 "claimed": false, 00:31:00.266 "zoned": false, 00:31:00.266 "supported_io_types": { 00:31:00.266 "read": true, 00:31:00.266 "write": true, 00:31:00.266 "unmap": true, 00:31:00.266 "flush": false, 00:31:00.266 "reset": true, 00:31:00.266 "nvme_admin": false, 00:31:00.266 "nvme_io": false, 00:31:00.266 "nvme_io_md": false, 00:31:00.266 "write_zeroes": true, 00:31:00.266 "zcopy": false, 00:31:00.266 "get_zone_info": false, 00:31:00.266 "zone_management": false, 00:31:00.266 "zone_append": false, 00:31:00.266 "compare": false, 00:31:00.266 "compare_and_write": false, 00:31:00.266 "abort": false, 00:31:00.266 "seek_hole": true, 00:31:00.266 "seek_data": true, 00:31:00.266 "copy": false, 00:31:00.266 "nvme_iov_md": false 00:31:00.266 }, 00:31:00.266 "driver_specific": { 00:31:00.266 "lvol": { 00:31:00.266 "lvol_store_uuid": "2d1d5b3a-37a2-4d7d-b6d8-6cea55fb50f2", 00:31:00.266 "base_bdev": "nvme0n1", 00:31:00.266 "thin_provision": true, 00:31:00.266 "num_allocated_clusters": 0, 00:31:00.267 "snapshot": false, 00:31:00.267 "clone": false, 00:31:00.267 "esnap_clone": false 00:31:00.267 } 00:31:00.267 } 00:31:00.267 } 00:31:00.267 ]' 00:31:00.267 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:00.267 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:00.267 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:00.267 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:00.267 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:00.267 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:00.267 20:58:17 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:00.267 20:58:17 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:00.528 20:58:17 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 
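create_nv_cache_bdev finishes above by splitting a 5171 MiB partition, nvc0n1p0, off the cache controller's namespace to serve as the FTL write-buffer cache. The traced numbers are consistent with sizing the cache at roughly 5% of the base volume (103424 * 5 / 100 = 5171 in integer arithmetic), though the exact formula in common.sh is not visible in this trace. A sketch under that assumption:

    # Assumed ~5%-of-base sizing rule; the traced values fit it exactly.
    cache_size=$(( $(get_bdev_size "$base_bdev") * 5 / 100 ))   # 5171
    "$rpc_py" bdev_split_create nvc0n1 -s "$cache_size" 1       # -> nvc0n1p0
    nvc_bdev=nvc0n1p0    # later handed to bdev_ftl_create as '-c nvc0n1p0'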
00:31:00.528 20:58:17 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size fddc04e2-5807-4891-be5b-142d9c2402da 00:31:00.528 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=fddc04e2-5807-4891-be5b-142d9c2402da 00:31:00.528 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:00.528 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:00.528 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:00.528 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fddc04e2-5807-4891-be5b-142d9c2402da 00:31:00.789 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:00.789 { 00:31:00.789 "name": "fddc04e2-5807-4891-be5b-142d9c2402da", 00:31:00.789 "aliases": [ 00:31:00.789 "lvs/nvme0n1p0" 00:31:00.789 ], 00:31:00.789 "product_name": "Logical Volume", 00:31:00.789 "block_size": 4096, 00:31:00.789 "num_blocks": 26476544, 00:31:00.789 "uuid": "fddc04e2-5807-4891-be5b-142d9c2402da", 00:31:00.789 "assigned_rate_limits": { 00:31:00.789 "rw_ios_per_sec": 0, 00:31:00.789 "rw_mbytes_per_sec": 0, 00:31:00.789 "r_mbytes_per_sec": 0, 00:31:00.789 "w_mbytes_per_sec": 0 00:31:00.789 }, 00:31:00.789 "claimed": false, 00:31:00.789 "zoned": false, 00:31:00.789 "supported_io_types": { 00:31:00.789 "read": true, 00:31:00.789 "write": true, 00:31:00.789 "unmap": true, 00:31:00.789 "flush": false, 00:31:00.789 "reset": true, 00:31:00.789 "nvme_admin": false, 00:31:00.789 "nvme_io": false, 00:31:00.789 "nvme_io_md": false, 00:31:00.789 "write_zeroes": true, 00:31:00.789 "zcopy": false, 00:31:00.789 "get_zone_info": false, 00:31:00.789 "zone_management": false, 00:31:00.789 "zone_append": false, 00:31:00.789 "compare": false, 00:31:00.789 "compare_and_write": false, 00:31:00.789 "abort": false, 00:31:00.789 "seek_hole": true, 00:31:00.789 "seek_data": true, 00:31:00.789 "copy": false, 00:31:00.789 "nvme_iov_md": false 00:31:00.789 }, 00:31:00.789 "driver_specific": { 00:31:00.789 "lvol": { 00:31:00.789 "lvol_store_uuid": "2d1d5b3a-37a2-4d7d-b6d8-6cea55fb50f2", 00:31:00.789 "base_bdev": "nvme0n1", 00:31:00.789 "thin_provision": true, 00:31:00.789 "num_allocated_clusters": 0, 00:31:00.789 "snapshot": false, 00:31:00.789 "clone": false, 00:31:00.789 "esnap_clone": false 00:31:00.789 } 00:31:00.789 } 00:31:00.789 } 00:31:00.789 ]' 00:31:00.789 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:00.790 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:00.790 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:00.790 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:00.790 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:00.790 20:58:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:00.790 20:58:17 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:00.790 20:58:17 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fddc04e2-5807-4891-be5b-142d9c2402da --l2p_dram_limit 10' 00:31:00.790 20:58:17 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:00.790 20:58:17 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:00.790 20:58:17 ftl.ftl_restore_fast 
-- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:00.790 20:58:17 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:00.790 20:58:17 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:00.790 20:58:17 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fddc04e2-5807-4891-be5b-142d9c2402da --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:01.054 [2024-12-06 20:58:17.936326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.054 [2024-12-06 20:58:17.936367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:01.054 [2024-12-06 20:58:17.936380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:01.054 [2024-12-06 20:58:17.936386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.054 [2024-12-06 20:58:17.936434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.054 [2024-12-06 20:58:17.936442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:01.054 [2024-12-06 20:58:17.936450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:01.054 [2024-12-06 20:58:17.936456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.054 [2024-12-06 20:58:17.936475] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:01.054 [2024-12-06 20:58:17.937046] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:01.054 [2024-12-06 20:58:17.937063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.054 [2024-12-06 20:58:17.937069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:01.054 [2024-12-06 20:58:17.937077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:31:01.054 [2024-12-06 20:58:17.937083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.054 [2024-12-06 20:58:17.937108] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 57031464-d590-49dc-928a-15f887881385 00:31:01.054 [2024-12-06 20:58:17.938071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.054 [2024-12-06 20:58:17.938095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:01.054 [2024-12-06 20:58:17.938103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:01.054 [2024-12-06 20:58:17.938110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.054 [2024-12-06 20:58:17.942976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.054 [2024-12-06 20:58:17.943090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:01.054 [2024-12-06 20:58:17.943102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.746 ms 00:31:01.054 [2024-12-06 20:58:17.943110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.054 [2024-12-06 20:58:17.943179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.054 [2024-12-06 20:58:17.943188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:01.054 [2024-12-06 20:58:17.943194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 
00:31:01.054 [2024-12-06 20:58:17.943204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.054 [2024-12-06 20:58:17.943243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.054 [2024-12-06 20:58:17.943253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:01.054 [2024-12-06 20:58:17.943261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:01.054 [2024-12-06 20:58:17.943268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.054 [2024-12-06 20:58:17.943284] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:01.054 [2024-12-06 20:58:17.946146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.054 [2024-12-06 20:58:17.946240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:01.054 [2024-12-06 20:58:17.946254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.864 ms 00:31:01.054 [2024-12-06 20:58:17.946261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.054 [2024-12-06 20:58:17.946292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.054 [2024-12-06 20:58:17.946299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:01.054 [2024-12-06 20:58:17.946306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:01.054 [2024-12-06 20:58:17.946311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.054 [2024-12-06 20:58:17.946331] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:01.054 [2024-12-06 20:58:17.946440] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:01.054 [2024-12-06 20:58:17.946453] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:01.054 [2024-12-06 20:58:17.946461] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:01.054 [2024-12-06 20:58:17.946471] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:01.054 [2024-12-06 20:58:17.946478] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:01.054 [2024-12-06 20:58:17.946485] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:01.054 [2024-12-06 20:58:17.946491] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:01.054 [2024-12-06 20:58:17.946501] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:01.054 [2024-12-06 20:58:17.946506] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:01.054 [2024-12-06 20:58:17.946514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.054 [2024-12-06 20:58:17.946525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:01.054 [2024-12-06 20:58:17.946532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:31:01.054 [2024-12-06 20:58:17.946537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.054 [2024-12-06 20:58:17.946603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.054 [2024-12-06 
20:58:17.946610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:01.054 [2024-12-06 20:58:17.946617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:01.054 [2024-12-06 20:58:17.946623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.054 [2024-12-06 20:58:17.946701] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:01.054 [2024-12-06 20:58:17.946708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:01.054 [2024-12-06 20:58:17.946716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:01.054 [2024-12-06 20:58:17.946722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.054 [2024-12-06 20:58:17.946729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:01.054 [2024-12-06 20:58:17.946734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:01.054 [2024-12-06 20:58:17.946741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:01.054 [2024-12-06 20:58:17.946746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:01.054 [2024-12-06 20:58:17.946754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:01.054 [2024-12-06 20:58:17.946759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:01.054 [2024-12-06 20:58:17.946765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:01.054 [2024-12-06 20:58:17.946771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:01.054 [2024-12-06 20:58:17.946778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:01.054 [2024-12-06 20:58:17.946783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:01.054 [2024-12-06 20:58:17.946790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:01.054 [2024-12-06 20:58:17.946795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.054 [2024-12-06 20:58:17.946803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:01.054 [2024-12-06 20:58:17.946808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:01.054 [2024-12-06 20:58:17.946814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.054 [2024-12-06 20:58:17.946820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:01.054 [2024-12-06 20:58:17.946826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:01.054 [2024-12-06 20:58:17.946831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.054 [2024-12-06 20:58:17.946838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:01.054 [2024-12-06 20:58:17.946843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:01.054 [2024-12-06 20:58:17.946849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.054 [2024-12-06 20:58:17.946854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:01.054 [2024-12-06 20:58:17.946860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:01.054 [2024-12-06 20:58:17.946865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.054 [2024-12-06 20:58:17.946871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 
00:31:01.054 [2024-12-06 20:58:17.946876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:01.054 [2024-12-06 20:58:17.946882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.054 [2024-12-06 20:58:17.946901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:01.054 [2024-12-06 20:58:17.946910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:01.054 [2024-12-06 20:58:17.946915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:01.054 [2024-12-06 20:58:17.946922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:01.054 [2024-12-06 20:58:17.946928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:01.054 [2024-12-06 20:58:17.946934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:01.054 [2024-12-06 20:58:17.946939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:01.054 [2024-12-06 20:58:17.946946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:01.054 [2024-12-06 20:58:17.946951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.054 [2024-12-06 20:58:17.946957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:01.054 [2024-12-06 20:58:17.946962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:01.054 [2024-12-06 20:58:17.946969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.055 [2024-12-06 20:58:17.946974] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:01.055 [2024-12-06 20:58:17.946981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:01.055 [2024-12-06 20:58:17.946986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:01.055 [2024-12-06 20:58:17.946993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.055 [2024-12-06 20:58:17.947000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:01.055 [2024-12-06 20:58:17.947008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:01.055 [2024-12-06 20:58:17.947013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:01.055 [2024-12-06 20:58:17.947020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:01.055 [2024-12-06 20:58:17.947025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:01.055 [2024-12-06 20:58:17.947032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:01.055 [2024-12-06 20:58:17.947038] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:01.055 [2024-12-06 20:58:17.947047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:01.055 [2024-12-06 20:58:17.947054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:01.055 [2024-12-06 20:58:17.947061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:01.055 [2024-12-06 20:58:17.947066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 
00:31:01.055 [2024-12-06 20:58:17.947073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:01.055 [2024-12-06 20:58:17.947078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:01.055 [2024-12-06 20:58:17.947086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:01.055 [2024-12-06 20:58:17.947091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:01.055 [2024-12-06 20:58:17.947098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:01.055 [2024-12-06 20:58:17.947103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:01.055 [2024-12-06 20:58:17.947112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:01.055 [2024-12-06 20:58:17.947117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:01.055 [2024-12-06 20:58:17.947123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:01.055 [2024-12-06 20:58:17.947129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:01.055 [2024-12-06 20:58:17.947135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:01.055 [2024-12-06 20:58:17.947141] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:01.055 [2024-12-06 20:58:17.947149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:01.055 [2024-12-06 20:58:17.947155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:01.055 [2024-12-06 20:58:17.947162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:01.055 [2024-12-06 20:58:17.947167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:01.055 [2024-12-06 20:58:17.947174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:01.055 [2024-12-06 20:58:17.947179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.055 [2024-12-06 20:58:17.947186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:01.055 [2024-12-06 20:58:17.947192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:31:01.055 [2024-12-06 20:58:17.947199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.055 [2024-12-06 20:58:17.947240] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs 
scrubbing, this may take a while. 00:31:01.055 [2024-12-06 20:58:17.947251] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:04.356 [2024-12-06 20:58:21.111082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.111176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:04.356 [2024-12-06 20:58:21.111195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3163.829 ms 00:31:04.356 [2024-12-06 20:58:21.111207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.142447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.142709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:04.356 [2024-12-06 20:58:21.142731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.996 ms 00:31:04.356 [2024-12-06 20:58:21.142743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.142916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.142932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:04.356 [2024-12-06 20:58:21.142942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:31:04.356 [2024-12-06 20:58:21.142959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.178126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.178175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:04.356 [2024-12-06 20:58:21.178187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.127 ms 00:31:04.356 [2024-12-06 20:58:21.178198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.178232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.178247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:04.356 [2024-12-06 20:58:21.178256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:04.356 [2024-12-06 20:58:21.178274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.178841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.178869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:04.356 [2024-12-06 20:58:21.178879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:31:04.356 [2024-12-06 20:58:21.178921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.179039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.179052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:04.356 [2024-12-06 20:58:21.179064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:31:04.356 [2024-12-06 20:58:21.179077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.196215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.196263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:31:04.356 [2024-12-06 20:58:21.196274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.119 ms 00:31:04.356 [2024-12-06 20:58:21.196284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.223926] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:04.356 [2024-12-06 20:58:21.227726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.227771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:04.356 [2024-12-06 20:58:21.227787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.333 ms 00:31:04.356 [2024-12-06 20:58:21.227795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.325068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.325137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:04.356 [2024-12-06 20:58:21.325155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.223 ms 00:31:04.356 [2024-12-06 20:58:21.325165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.325374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.325390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:04.356 [2024-12-06 20:58:21.325406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:31:04.356 [2024-12-06 20:58:21.325414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.352105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.352155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:04.356 [2024-12-06 20:58:21.352171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.633 ms 00:31:04.356 [2024-12-06 20:58:21.352181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.377310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.377356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:04.356 [2024-12-06 20:58:21.377371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.072 ms 00:31:04.356 [2024-12-06 20:58:21.377379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.378020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.378039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:04.356 [2024-12-06 20:58:21.378051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:31:04.356 [2024-12-06 20:58:21.378061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.356 [2024-12-06 20:58:21.464335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.356 [2024-12-06 20:58:21.464402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:04.356 [2024-12-06 20:58:21.464422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.228 ms 00:31:04.356 [2024-12-06 20:58:21.464432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.616 
[2024-12-06 20:58:21.491074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.616 [2024-12-06 20:58:21.491122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:04.616 [2024-12-06 20:58:21.491137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.559 ms 00:31:04.616 [2024-12-06 20:58:21.491146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.616 [2024-12-06 20:58:21.516796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.616 [2024-12-06 20:58:21.516843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:04.616 [2024-12-06 20:58:21.516858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.597 ms 00:31:04.616 [2024-12-06 20:58:21.516867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.616 [2024-12-06 20:58:21.542807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.616 [2024-12-06 20:58:21.543029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:04.616 [2024-12-06 20:58:21.543056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.867 ms 00:31:04.616 [2024-12-06 20:58:21.543064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.616 [2024-12-06 20:58:21.543140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.616 [2024-12-06 20:58:21.543151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:04.616 [2024-12-06 20:58:21.543167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:04.616 [2024-12-06 20:58:21.543176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.616 [2024-12-06 20:58:21.543268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.616 [2024-12-06 20:58:21.543281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:04.616 [2024-12-06 20:58:21.543291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:04.616 [2024-12-06 20:58:21.543300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.616 [2024-12-06 20:58:21.544465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3607.617 ms, result 0 00:31:04.616 { 00:31:04.616 "name": "ftl0", 00:31:04.616 "uuid": "57031464-d590-49dc-928a-15f887881385" 00:31:04.616 } 00:31:04.616 20:58:21 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:04.616 20:58:21 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:04.878 20:58:21 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:04.878 20:58:21 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:04.878 [2024-12-06 20:58:21.967823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.878 [2024-12-06 20:58:21.967913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:04.878 [2024-12-06 20:58:21.967929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:04.878 [2024-12-06 20:58:21.967940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.878 [2024-12-06 20:58:21.967965] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:31:04.878 [2024-12-06 20:58:21.970984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.878 [2024-12-06 20:58:21.971027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:04.878 [2024-12-06 20:58:21.971040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.995 ms 00:31:04.878 [2024-12-06 20:58:21.971050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.878 [2024-12-06 20:58:21.971323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.878 [2024-12-06 20:58:21.971337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:04.878 [2024-12-06 20:58:21.971349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:31:04.878 [2024-12-06 20:58:21.971357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.878 [2024-12-06 20:58:21.974611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.878 [2024-12-06 20:58:21.974760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:04.878 [2024-12-06 20:58:21.974779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.236 ms 00:31:04.878 [2024-12-06 20:58:21.974788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.878 [2024-12-06 20:58:21.981027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.878 [2024-12-06 20:58:21.981186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:04.878 [2024-12-06 20:58:21.981213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.212 ms 00:31:04.878 [2024-12-06 20:58:21.981221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.879 [2024-12-06 20:58:22.008006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.879 [2024-12-06 20:58:22.008182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:04.879 [2024-12-06 20:58:22.008207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.700 ms 00:31:04.879 [2024-12-06 20:58:22.008215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.140 [2024-12-06 20:58:22.025147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.140 [2024-12-06 20:58:22.025196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:05.140 [2024-12-06 20:58:22.025214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.883 ms 00:31:05.140 [2024-12-06 20:58:22.025223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.140 [2024-12-06 20:58:22.025395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.140 [2024-12-06 20:58:22.025408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:05.140 [2024-12-06 20:58:22.025420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:31:05.140 [2024-12-06 20:58:22.025428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.140 [2024-12-06 20:58:22.050826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.140 [2024-12-06 20:58:22.050872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:05.140 [2024-12-06 20:58:22.050902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.373 ms 00:31:05.140 
[2024-12-06 20:58:22.050911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.140 [2024-12-06 20:58:22.075851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.140 [2024-12-06 20:58:22.075915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:05.140 [2024-12-06 20:58:22.075930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.890 ms 00:31:05.140 [2024-12-06 20:58:22.075937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.140 [2024-12-06 20:58:22.100246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.140 [2024-12-06 20:58:22.100290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:05.140 [2024-12-06 20:58:22.100304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.256 ms 00:31:05.140 [2024-12-06 20:58:22.100312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.140 [2024-12-06 20:58:22.124464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.140 [2024-12-06 20:58:22.124507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:05.140 [2024-12-06 20:58:22.124520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.057 ms 00:31:05.141 [2024-12-06 20:58:22.124527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.141 [2024-12-06 20:58:22.124573] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:05.141 ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (100 identical per-band records condensed) 00:31:05.142 [2024-12-06 20:58:22.125548] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:05.142 [2024-12-06 20:58:22.125558] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 57031464-d590-49dc-928a-15f887881385 00:31:05.142 [2024-12-06 20:58:22.125566] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:05.142 [2024-12-06 20:58:22.125578] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:05.142 [2024-12-06 20:58:22.125588] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:05.142 [2024-12-06 20:58:22.125598] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:05.142 [2024-12-06 20:58:22.125605] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:05.142 [2024-12-06 20:58:22.125616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:05.142 [2024-12-06 20:58:22.125623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:05.142 [2024-12-06 20:58:22.125633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:05.142 [2024-12-06 20:58:22.125640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:05.142 [2024-12-06 20:58:22.125649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.142 [2024-12-06 20:58:22.125657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:05.142 [2024-12-06 20:58:22.125668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:31:05.142 [2024-12-06 20:58:22.125678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.142 [2024-12-06 20:58:22.138926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.142 [2024-12-06 20:58:22.138964] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:05.142 [2024-12-06 20:58:22.138978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.169 ms 00:31:05.142 [2024-12-06 20:58:22.138987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.142 [2024-12-06 20:58:22.139396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.142 [2024-12-06 20:58:22.139412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:05.142 [2024-12-06 20:58:22.139427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:31:05.142 [2024-12-06 20:58:22.139435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.142 [2024-12-06 20:58:22.185555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.142 [2024-12-06 20:58:22.185602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:05.142 [2024-12-06 20:58:22.185616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.142 [2024-12-06 20:58:22.185625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.142 [2024-12-06 20:58:22.185689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.142 [2024-12-06 20:58:22.185698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:05.142 [2024-12-06 20:58:22.185712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.142 [2024-12-06 20:58:22.185720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.142 [2024-12-06 20:58:22.185799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.142 [2024-12-06 20:58:22.185810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:05.142 [2024-12-06 20:58:22.185821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.142 [2024-12-06 20:58:22.185829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.142 [2024-12-06 20:58:22.185852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.142 [2024-12-06 20:58:22.185860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:05.142 [2024-12-06 20:58:22.185870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.142 [2024-12-06 20:58:22.185879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.142 [2024-12-06 20:58:22.269344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.142 [2024-12-06 20:58:22.269579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:05.142 [2024-12-06 20:58:22.269606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.142 [2024-12-06 20:58:22.269614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.402 [2024-12-06 20:58:22.337183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.402 [2024-12-06 20:58:22.337240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:05.402 [2024-12-06 20:58:22.337256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.402 [2024-12-06 20:58:22.337267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.402 [2024-12-06 20:58:22.337378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
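The ftl_dev_dump_stats block a few records above ends with "WAF: inf". The write amplification figure there reads as total (device) writes divided by user (host) writes, and the shutdown-time counters, total writes: 960 against user writes: 0, make that ratio a division by zero, which ftl_debug.c reports as inf. A minimal sketch of that arithmetic, assuming only the two counter lines shown in the dump; the script name and parsing flow are illustrative, not part of the SPDK test suite:

  #!/usr/bin/env bash
  # Recompute the WAF printed by ftl_dev_dump_stats from a saved console log.
  # Assumes the counters appear as "total writes: N" and "user writes: N",
  # as they do in the dump above.
  log="$1"
  total=$(grep -oE 'total writes: [0-9]+' "$log" | tail -1 | awk '{print $3}')
  user=$(grep -oE 'user writes: [0-9]+' "$log" | tail -1 | awk '{print $3}')
  if [ "${user:-0}" -eq 0 ]; then
      # Division by zero: no user data written yet, so WAF is reported as inf
      echo "WAF: inf"
  else
      # WAF = device (total) writes / host (user) writes
      echo "WAF: $(echo "scale=2; $total / $user" | bc)"
  fi

Run against this console log it prints "WAF: inf", consistent with the dump: at that point all 960 writes were FTL metadata traffic, with no user data written yet.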
00:31:05.402 [2024-12-06 20:58:22.337390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:05.402 [2024-12-06 20:58:22.337401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.402 [2024-12-06 20:58:22.337409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.402 [2024-12-06 20:58:22.337463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.402 [2024-12-06 20:58:22.337472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:05.402 [2024-12-06 20:58:22.337483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.402 [2024-12-06 20:58:22.337491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.402 [2024-12-06 20:58:22.337600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.402 [2024-12-06 20:58:22.337611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:05.402 [2024-12-06 20:58:22.337622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.402 [2024-12-06 20:58:22.337630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.402 [2024-12-06 20:58:22.337667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.402 [2024-12-06 20:58:22.337678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:05.402 [2024-12-06 20:58:22.337688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.402 [2024-12-06 20:58:22.337696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.402 [2024-12-06 20:58:22.337742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.402 [2024-12-06 20:58:22.337752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:05.402 [2024-12-06 20:58:22.337762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.402 [2024-12-06 20:58:22.337771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.402 [2024-12-06 20:58:22.337825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.402 [2024-12-06 20:58:22.337836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:05.402 [2024-12-06 20:58:22.337848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.402 [2024-12-06 20:58:22.337856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.402 [2024-12-06 20:58:22.338038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.179 ms, result 0 00:31:05.402 true 00:31:05.402 20:58:22 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 83523 00:31:05.402 20:58:22 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 83523 ']' 00:31:05.402 20:58:22 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 83523 00:31:05.402 20:58:22 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:31:05.402 20:58:22 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.402 20:58:22 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83523 00:31:05.402 killing process with pid 83523 00:31:05.402 20:58:22 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:05.402 
20:58:22 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:05.402 20:58:22 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83523' 00:31:05.402 20:58:22 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 83523 00:31:05.402 20:58:22 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 83523 00:31:11.999 20:58:28 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:16.210 262144+0 records in 00:31:16.211 262144+0 records out 00:31:16.211 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.99504 s, 269 MB/s 00:31:16.211 20:58:32 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:18.154 20:58:35 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:18.154 [2024-12-06 20:58:35.149093] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:31:18.154 [2024-12-06 20:58:35.149233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83752 ] 00:31:18.417 [2024-12-06 20:58:35.313760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.417 [2024-12-06 20:58:35.441747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.678 [2024-12-06 20:58:35.738857] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:18.678 [2024-12-06 20:58:35.738964] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:18.941 [2024-12-06 20:58:35.900610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.941 [2024-12-06 20:58:35.900679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:18.941 [2024-12-06 20:58:35.900694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:18.941 [2024-12-06 20:58:35.900703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.941 [2024-12-06 20:58:35.900760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.941 [2024-12-06 20:58:35.900774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:18.941 [2024-12-06 20:58:35.900783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:18.941 [2024-12-06 20:58:35.900791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.941 [2024-12-06 20:58:35.900813] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:18.941 [2024-12-06 20:58:35.901608] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:18.941 [2024-12-06 20:58:35.901629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.941 [2024-12-06 20:58:35.901638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:18.941 [2024-12-06 20:58:35.901647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:31:18.941 [2024-12-06 20:58:35.901655] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.941 [2024-12-06 20:58:35.903431] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:18.941 [2024-12-06 20:58:35.917921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.941 [2024-12-06 20:58:35.917984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:18.941 [2024-12-06 20:58:35.917999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.492 ms 00:31:18.941 [2024-12-06 20:58:35.918007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.941 [2024-12-06 20:58:35.918102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.941 [2024-12-06 20:58:35.918113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:18.941 [2024-12-06 20:58:35.918123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:18.941 [2024-12-06 20:58:35.918131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.941 [2024-12-06 20:58:35.926650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.941 [2024-12-06 20:58:35.926695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:18.941 [2024-12-06 20:58:35.926707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.436 ms 00:31:18.941 [2024-12-06 20:58:35.926722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.941 [2024-12-06 20:58:35.926807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.941 [2024-12-06 20:58:35.926818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:18.941 [2024-12-06 20:58:35.926827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:18.941 [2024-12-06 20:58:35.926836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.941 [2024-12-06 20:58:35.926884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.941 [2024-12-06 20:58:35.926921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:18.941 [2024-12-06 20:58:35.926931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:18.941 [2024-12-06 20:58:35.926938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.941 [2024-12-06 20:58:35.926965] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:18.941 [2024-12-06 20:58:35.931048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.941 [2024-12-06 20:58:35.931090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:18.941 [2024-12-06 20:58:35.931105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.088 ms 00:31:18.941 [2024-12-06 20:58:35.931113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.941 [2024-12-06 20:58:35.931155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.941 [2024-12-06 20:58:35.931164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:18.941 [2024-12-06 20:58:35.931173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:18.941 [2024-12-06 20:58:35.931181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.941 [2024-12-06 20:58:35.931236] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:18.941 [2024-12-06 20:58:35.931262] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:18.941 [2024-12-06 20:58:35.931300] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:18.941 [2024-12-06 20:58:35.931320] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:18.941 [2024-12-06 20:58:35.931428] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:18.941 [2024-12-06 20:58:35.931439] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:18.941 [2024-12-06 20:58:35.931451] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:18.941 [2024-12-06 20:58:35.931462] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:18.941 [2024-12-06 20:58:35.931471] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:18.941 [2024-12-06 20:58:35.931479] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:18.941 [2024-12-06 20:58:35.931488] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:18.941 [2024-12-06 20:58:35.931499] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:18.941 [2024-12-06 20:58:35.931508] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:18.941 [2024-12-06 20:58:35.931517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.941 [2024-12-06 20:58:35.931525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:18.941 [2024-12-06 20:58:35.931533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:31:18.941 [2024-12-06 20:58:35.931540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.941 [2024-12-06 20:58:35.931624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.941 [2024-12-06 20:58:35.931633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:18.941 [2024-12-06 20:58:35.931641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:18.941 [2024-12-06 20:58:35.931649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.941 [2024-12-06 20:58:35.931756] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:18.941 [2024-12-06 20:58:35.931768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:18.941 [2024-12-06 20:58:35.931776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:18.941 [2024-12-06 20:58:35.931785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.941 [2024-12-06 20:58:35.931793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:18.941 [2024-12-06 20:58:35.931801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:18.941 [2024-12-06 20:58:35.931808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:18.941 [2024-12-06 20:58:35.931816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:31:18.941 [2024-12-06 20:58:35.931823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:18.941 [2024-12-06 20:58:35.931830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:18.941 [2024-12-06 20:58:35.931837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:18.941 [2024-12-06 20:58:35.931845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:18.941 [2024-12-06 20:58:35.931852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:18.941 [2024-12-06 20:58:35.931866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:18.941 [2024-12-06 20:58:35.931873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:18.941 [2024-12-06 20:58:35.931879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.941 [2024-12-06 20:58:35.931912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:18.941 [2024-12-06 20:58:35.931920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:18.941 [2024-12-06 20:58:35.931929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.941 [2024-12-06 20:58:35.931937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:18.941 [2024-12-06 20:58:35.931944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:18.941 [2024-12-06 20:58:35.931952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.942 [2024-12-06 20:58:35.931959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:18.942 [2024-12-06 20:58:35.931966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:18.942 [2024-12-06 20:58:35.931974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.942 [2024-12-06 20:58:35.931981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:18.942 [2024-12-06 20:58:35.931988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:18.942 [2024-12-06 20:58:35.931995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.942 [2024-12-06 20:58:35.932002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:18.942 [2024-12-06 20:58:35.932009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:18.942 [2024-12-06 20:58:35.932016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.942 [2024-12-06 20:58:35.932023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:18.942 [2024-12-06 20:58:35.932030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:18.942 [2024-12-06 20:58:35.932038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:18.942 [2024-12-06 20:58:35.932131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:18.942 [2024-12-06 20:58:35.932138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:18.942 [2024-12-06 20:58:35.932146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:18.942 [2024-12-06 20:58:35.932153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:18.942 [2024-12-06 20:58:35.932161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:18.942 [2024-12-06 20:58:35.932169] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.942 [2024-12-06 20:58:35.932175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:18.942 [2024-12-06 20:58:35.932182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:18.942 [2024-12-06 20:58:35.932190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.942 [2024-12-06 20:58:35.932197] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:18.942 [2024-12-06 20:58:35.932205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:18.942 [2024-12-06 20:58:35.932213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:18.942 [2024-12-06 20:58:35.932222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.942 [2024-12-06 20:58:35.932229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:18.942 [2024-12-06 20:58:35.932237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:18.942 [2024-12-06 20:58:35.932244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:18.942 [2024-12-06 20:58:35.932254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:18.942 [2024-12-06 20:58:35.932261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:18.942 [2024-12-06 20:58:35.932269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:18.942 [2024-12-06 20:58:35.932278] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:18.942 [2024-12-06 20:58:35.932288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:18.942 [2024-12-06 20:58:35.932300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:18.942 [2024-12-06 20:58:35.932307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:18.942 [2024-12-06 20:58:35.932323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:18.942 [2024-12-06 20:58:35.932331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:18.942 [2024-12-06 20:58:35.932338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:18.942 [2024-12-06 20:58:35.932345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:18.942 [2024-12-06 20:58:35.932352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:18.942 [2024-12-06 20:58:35.932360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:18.942 [2024-12-06 20:58:35.932367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:18.942 [2024-12-06 20:58:35.932373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:31:18.942 [2024-12-06 20:58:35.932380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:18.942 [2024-12-06 20:58:35.932387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:18.942 [2024-12-06 20:58:35.932394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:18.942 [2024-12-06 20:58:35.932401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:18.942 [2024-12-06 20:58:35.932408] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:18.942 [2024-12-06 20:58:35.932416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:18.942 [2024-12-06 20:58:35.932424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:18.942 [2024-12-06 20:58:35.932432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:18.942 [2024-12-06 20:58:35.932439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:18.942 [2024-12-06 20:58:35.932446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:18.942 [2024-12-06 20:58:35.932454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.942 [2024-12-06 20:58:35.932462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:18.942 [2024-12-06 20:58:35.932470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:31:18.942 [2024-12-06 20:58:35.932478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.942 [2024-12-06 20:58:35.965109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.942 [2024-12-06 20:58:35.965161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:18.942 [2024-12-06 20:58:35.965175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.582 ms 00:31:18.942 [2024-12-06 20:58:35.965189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.942 [2024-12-06 20:58:35.965284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.942 [2024-12-06 20:58:35.965294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:18.942 [2024-12-06 20:58:35.965302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:31:18.942 [2024-12-06 20:58:35.965310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.942 [2024-12-06 20:58:36.010355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.942 [2024-12-06 20:58:36.010411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:18.942 [2024-12-06 20:58:36.010424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.982 ms 00:31:18.942 [2024-12-06 20:58:36.010433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:18.942 [2024-12-06 20:58:36.010484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.942 [2024-12-06 20:58:36.010494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:18.942 [2024-12-06 20:58:36.010508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:18.942 [2024-12-06 20:58:36.010516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.942 [2024-12-06 20:58:36.011148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.942 [2024-12-06 20:58:36.011174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:18.942 [2024-12-06 20:58:36.011184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:31:18.942 [2024-12-06 20:58:36.011193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.942 [2024-12-06 20:58:36.011362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.942 [2024-12-06 20:58:36.011373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:18.942 [2024-12-06 20:58:36.011389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:31:18.942 [2024-12-06 20:58:36.011397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.942 [2024-12-06 20:58:36.027339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.942 [2024-12-06 20:58:36.027390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:18.942 [2024-12-06 20:58:36.027402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.922 ms 00:31:18.942 [2024-12-06 20:58:36.027411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.942 [2024-12-06 20:58:36.042112] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:18.942 [2024-12-06 20:58:36.042164] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:18.942 [2024-12-06 20:58:36.042179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.942 [2024-12-06 20:58:36.042188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:18.942 [2024-12-06 20:58:36.042198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.655 ms 00:31:18.942 [2024-12-06 20:58:36.042207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.942 [2024-12-06 20:58:36.068559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.942 [2024-12-06 20:58:36.068779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:18.942 [2024-12-06 20:58:36.068803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.294 ms 00:31:18.942 [2024-12-06 20:58:36.068812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.204 [2024-12-06 20:58:36.082153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.204 [2024-12-06 20:58:36.082201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:19.204 [2024-12-06 20:58:36.082213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.289 ms 00:31:19.204 [2024-12-06 20:58:36.082221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.204 [2024-12-06 20:58:36.095339] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.204 [2024-12-06 20:58:36.095530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:19.204 [2024-12-06 20:58:36.095552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.065 ms 00:31:19.204 [2024-12-06 20:58:36.095560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.204 [2024-12-06 20:58:36.096366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.204 [2024-12-06 20:58:36.096405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:19.204 [2024-12-06 20:58:36.096417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:31:19.204 [2024-12-06 20:58:36.096428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.204 [2024-12-06 20:58:36.162283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.204 [2024-12-06 20:58:36.162531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:19.204 [2024-12-06 20:58:36.162556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.831 ms 00:31:19.204 [2024-12-06 20:58:36.162575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.204 [2024-12-06 20:58:36.174287] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:19.204 [2024-12-06 20:58:36.177503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.204 [2024-12-06 20:58:36.177709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:19.204 [2024-12-06 20:58:36.177731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.853 ms 00:31:19.204 [2024-12-06 20:58:36.177741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.204 [2024-12-06 20:58:36.177847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.204 [2024-12-06 20:58:36.177861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:19.204 [2024-12-06 20:58:36.177871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:19.204 [2024-12-06 20:58:36.177880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.204 [2024-12-06 20:58:36.177998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.204 [2024-12-06 20:58:36.178011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:19.204 [2024-12-06 20:58:36.178021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:19.204 [2024-12-06 20:58:36.178029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.204 [2024-12-06 20:58:36.178055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.204 [2024-12-06 20:58:36.178065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:19.204 [2024-12-06 20:58:36.178074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:19.204 [2024-12-06 20:58:36.178082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.204 [2024-12-06 20:58:36.178116] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:19.204 [2024-12-06 20:58:36.178129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.204 [2024-12-06 20:58:36.178138] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:19.204 [2024-12-06 20:58:36.178147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:19.204 [2024-12-06 20:58:36.178155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.204 [2024-12-06 20:58:36.205358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.204 [2024-12-06 20:58:36.205411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:19.204 [2024-12-06 20:58:36.205426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.179 ms 00:31:19.204 [2024-12-06 20:58:36.205441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.204 [2024-12-06 20:58:36.205534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.204 [2024-12-06 20:58:36.205544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:19.204 [2024-12-06 20:58:36.205554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:31:19.204 [2024-12-06 20:58:36.205562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.204 [2024-12-06 20:58:36.206866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.763 ms, result 0 00:31:20.145  [2024-12-06T20:58:38.273Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-06T20:58:39.658Z] Copying: 34/1024 [MB] (17 MBps) [2024-12-06T20:58:40.234Z] Copying: 54/1024 [MB] (19 MBps) [2024-12-06T20:58:41.624Z] Copying: 64/1024 [MB] (10 MBps) [2024-12-06T20:58:42.570Z] Copying: 80/1024 [MB] (16 MBps) [2024-12-06T20:58:43.516Z] Copying: 95/1024 [MB] (15 MBps) [2024-12-06T20:58:44.460Z] Copying: 110/1024 [MB] (14 MBps) [2024-12-06T20:58:45.404Z] Copying: 121/1024 [MB] (10 MBps) [2024-12-06T20:58:46.343Z] Copying: 137/1024 [MB] (16 MBps) [2024-12-06T20:58:47.283Z] Copying: 147/1024 [MB] (10 MBps) [2024-12-06T20:58:48.224Z] Copying: 158/1024 [MB] (10 MBps) [2024-12-06T20:58:49.224Z] Copying: 168/1024 [MB] (10 MBps) [2024-12-06T20:58:50.613Z] Copying: 181/1024 [MB] (12 MBps) [2024-12-06T20:58:51.558Z] Copying: 203/1024 [MB] (22 MBps) [2024-12-06T20:58:52.503Z] Copying: 222/1024 [MB] (19 MBps) [2024-12-06T20:58:53.448Z] Copying: 239/1024 [MB] (16 MBps) [2024-12-06T20:58:54.391Z] Copying: 257/1024 [MB] (17 MBps) [2024-12-06T20:58:55.332Z] Copying: 280/1024 [MB] (23 MBps) [2024-12-06T20:58:56.271Z] Copying: 317/1024 [MB] (36 MBps) [2024-12-06T20:58:57.659Z] Copying: 334/1024 [MB] (16 MBps) [2024-12-06T20:58:58.230Z] Copying: 354/1024 [MB] (19 MBps) [2024-12-06T20:58:59.633Z] Copying: 368/1024 [MB] (14 MBps) [2024-12-06T20:59:00.578Z] Copying: 384/1024 [MB] (16 MBps) [2024-12-06T20:59:01.523Z] Copying: 403/1024 [MB] (18 MBps) [2024-12-06T20:59:02.467Z] Copying: 416/1024 [MB] (13 MBps) [2024-12-06T20:59:03.411Z] Copying: 428/1024 [MB] (12 MBps) [2024-12-06T20:59:04.406Z] Copying: 440/1024 [MB] (11 MBps) [2024-12-06T20:59:05.352Z] Copying: 458/1024 [MB] (17 MBps) [2024-12-06T20:59:06.293Z] Copying: 480/1024 [MB] (21 MBps) [2024-12-06T20:59:07.234Z] Copying: 497/1024 [MB] (17 MBps) [2024-12-06T20:59:08.623Z] Copying: 532/1024 [MB] (35 MBps) [2024-12-06T20:59:09.570Z] Copying: 555/1024 [MB] (22 MBps) [2024-12-06T20:59:10.514Z] Copying: 575/1024 [MB] (19 MBps) [2024-12-06T20:59:11.456Z] Copying: 602/1024 [MB] (27 MBps) [2024-12-06T20:59:12.401Z] Copying: 622/1024 [MB] (20 MBps) [2024-12-06T20:59:13.349Z] Copying: 639/1024 [MB] (16 MBps) 
[2024-12-06T20:59:14.296Z] Copying: 654/1024 [MB] (15 MBps) [2024-12-06T20:59:15.240Z] Copying: 671/1024 [MB] (16 MBps) [2024-12-06T20:59:16.624Z] Copying: 691/1024 [MB] (20 MBps) [2024-12-06T20:59:17.566Z] Copying: 711/1024 [MB] (20 MBps) [2024-12-06T20:59:18.531Z] Copying: 730/1024 [MB] (18 MBps) [2024-12-06T20:59:19.475Z] Copying: 758/1024 [MB] (28 MBps) [2024-12-06T20:59:20.416Z] Copying: 777/1024 [MB] (19 MBps) [2024-12-06T20:59:21.359Z] Copying: 800/1024 [MB] (23 MBps) [2024-12-06T20:59:22.297Z] Copying: 811/1024 [MB] (10 MBps) [2024-12-06T20:59:23.234Z] Copying: 825/1024 [MB] (13 MBps) [2024-12-06T20:59:24.621Z] Copying: 836/1024 [MB] (11 MBps) [2024-12-06T20:59:25.586Z] Copying: 846/1024 [MB] (10 MBps) [2024-12-06T20:59:26.530Z] Copying: 856/1024 [MB] (10 MBps) [2024-12-06T20:59:27.481Z] Copying: 867/1024 [MB] (10 MBps) [2024-12-06T20:59:28.423Z] Copying: 877/1024 [MB] (10 MBps) [2024-12-06T20:59:29.363Z] Copying: 887/1024 [MB] (10 MBps) [2024-12-06T20:59:30.303Z] Copying: 898/1024 [MB] (10 MBps) [2024-12-06T20:59:31.248Z] Copying: 929752/1048576 [kB] (10072 kBps) [2024-12-06T20:59:32.654Z] Copying: 918/1024 [MB] (10 MBps) [2024-12-06T20:59:33.228Z] Copying: 928/1024 [MB] (10 MBps) [2024-12-06T20:59:34.616Z] Copying: 953/1024 [MB] (24 MBps) [2024-12-06T20:59:35.560Z] Copying: 964/1024 [MB] (10 MBps) [2024-12-06T20:59:36.499Z] Copying: 980/1024 [MB] (16 MBps) [2024-12-06T20:59:37.444Z] Copying: 997/1024 [MB] (17 MBps) [2024-12-06T20:59:38.390Z] Copying: 1008/1024 [MB] (10 MBps) [2024-12-06T20:59:38.965Z] Copying: 1018/1024 [MB] (10 MBps) [2024-12-06T20:59:38.965Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-06 20:59:38.793993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.832 [2024-12-06 20:59:38.794048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:21.832 [2024-12-06 20:59:38.794065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:21.832 [2024-12-06 20:59:38.794074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.832 [2024-12-06 20:59:38.794096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:21.832 [2024-12-06 20:59:38.797144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.832 [2024-12-06 20:59:38.797188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:21.832 [2024-12-06 20:59:38.797207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.032 ms 00:32:21.832 [2024-12-06 20:59:38.797216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.832 [2024-12-06 20:59:38.800052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.832 [2024-12-06 20:59:38.800238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:21.833 [2024-12-06 20:59:38.800260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.789 ms 00:32:21.833 [2024-12-06 20:59:38.800270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.833 [2024-12-06 20:59:38.800305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.833 [2024-12-06 20:59:38.800314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:21.833 [2024-12-06 20:59:38.800323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:21.833 [2024-12-06 20:59:38.800331] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.833 [2024-12-06 20:59:38.800392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.833 [2024-12-06 20:59:38.800402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:21.833 [2024-12-06 20:59:38.800411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:32:21.833 [2024-12-06 20:59:38.800418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.833 [2024-12-06 20:59:38.800432] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:21.833 ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 70: 0 / 261120 wr_cnt: 0 state: free (70 identical per-band records condensed) 00:32:21.833 [2024-12-06 
20:59:38.801172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:21.833 [2024-12-06 20:59:38.801179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:21.833 [2024-12-06 20:59:38.801187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:21.833 [2024-12-06 20:59:38.801194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:21.833 [2024-12-06 20:59:38.801201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:21.833 [2024-12-06 20:59:38.801217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:21.833 [2024-12-06 20:59:38.801224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:21.833 [2024-12-06 20:59:38.801231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:32:21.834 [2024-12-06 20:59:38.801366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:21.834 [2024-12-06 20:59:38.801412] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:21.834 [2024-12-06 20:59:38.801420] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 57031464-d590-49dc-928a-15f887881385 00:32:21.834 [2024-12-06 20:59:38.801428] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:21.834 [2024-12-06 20:59:38.801435] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:21.834 [2024-12-06 20:59:38.801443] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:21.834 [2024-12-06 20:59:38.801454] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:21.834 [2024-12-06 20:59:38.801461] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:21.834 [2024-12-06 20:59:38.801468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:21.834 [2024-12-06 20:59:38.801476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:21.834 [2024-12-06 20:59:38.801482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:21.834 [2024-12-06 20:59:38.801488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:21.834 [2024-12-06 20:59:38.801495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.834 [2024-12-06 20:59:38.801503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:21.834 [2024-12-06 20:59:38.801511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:32:21.834 [2024-12-06 20:59:38.801519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.834 [2024-12-06 20:59:38.815156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.834 [2024-12-06 20:59:38.815321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:21.834 [2024-12-06 20:59:38.815340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.620 ms 00:32:21.834 [2024-12-06 20:59:38.815347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.834 [2024-12-06 20:59:38.815732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.834 [2024-12-06 20:59:38.815747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:21.834 [2024-12-06 20:59:38.815757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:32:21.834 [2024-12-06 20:59:38.815764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.834 [2024-12-06 20:59:38.852087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.834 [2024-12-06 20:59:38.852137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:32:21.834 [2024-12-06 20:59:38.852149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.834 [2024-12-06 20:59:38.852158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.834 [2024-12-06 20:59:38.852229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.834 [2024-12-06 20:59:38.852239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:21.834 [2024-12-06 20:59:38.852249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.834 [2024-12-06 20:59:38.852258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.834 [2024-12-06 20:59:38.852315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.834 [2024-12-06 20:59:38.852331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:21.834 [2024-12-06 20:59:38.852341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.834 [2024-12-06 20:59:38.852349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.834 [2024-12-06 20:59:38.852366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.834 [2024-12-06 20:59:38.852375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:21.834 [2024-12-06 20:59:38.852389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.834 [2024-12-06 20:59:38.852398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.834 [2024-12-06 20:59:38.935831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.834 [2024-12-06 20:59:38.935920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:21.834 [2024-12-06 20:59:38.935935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.834 [2024-12-06 20:59:38.935944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.096 [2024-12-06 20:59:39.004118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.097 [2024-12-06 20:59:39.004174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:22.097 [2024-12-06 20:59:39.004188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.097 [2024-12-06 20:59:39.004196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.097 [2024-12-06 20:59:39.004272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.097 [2024-12-06 20:59:39.004282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:22.097 [2024-12-06 20:59:39.004298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.097 [2024-12-06 20:59:39.004307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.097 [2024-12-06 20:59:39.004345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.097 [2024-12-06 20:59:39.004354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:22.097 [2024-12-06 20:59:39.004363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.097 [2024-12-06 20:59:39.004372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.097 [2024-12-06 20:59:39.004455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.097 [2024-12-06 20:59:39.004466] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:22.097 [2024-12-06 20:59:39.004483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.097 [2024-12-06 20:59:39.004495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.097 [2024-12-06 20:59:39.004522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.097 [2024-12-06 20:59:39.004532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:22.097 [2024-12-06 20:59:39.004540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.097 [2024-12-06 20:59:39.004547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.097 [2024-12-06 20:59:39.004589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.097 [2024-12-06 20:59:39.004598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:22.097 [2024-12-06 20:59:39.004607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.097 [2024-12-06 20:59:39.004618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.097 [2024-12-06 20:59:39.004665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.097 [2024-12-06 20:59:39.004676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:22.097 [2024-12-06 20:59:39.004685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.097 [2024-12-06 20:59:39.004693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.097 [2024-12-06 20:59:39.004825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 210.797 ms, result 0 00:32:23.487 00:32:23.487 00:32:23.487 20:59:40 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:23.487 [2024-12-06 20:59:40.271071] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
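The statistics dump in the shutdown sequence above reports total writes: 32, user writes: 0 and WAF: inf, which is self-consistent: write amplification factor is total media writes divided by user writes, and with zero user writes the ratio degenerates to infinity. A minimal sketch of that calculation (hypothetical helper for illustration, not SPDK's actual ftl_debug.c code):

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical WAF computation mirroring the "Dump statistics" output:
     * WAF = total writes / user writes; with 0 user writes it is "inf". */
    static double waf(unsigned long total_writes, unsigned long user_writes)
    {
        if (user_writes == 0)
            return INFINITY; /* matches the "WAF: inf" line in the log */
        return (double)total_writes / (double)user_writes;
    }

    int main(void)
    {
        printf("WAF: %g\n", waf(32, 0)); /* prints "WAF: inf" */
        return 0;
    }

This also explains why the band validity dump shows every band as "0 / 261120 wr_cnt: 0 state: free": no user data reached the bands, so only the 32 internal metadata writes are counted.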
00:32:23.488 [2024-12-06 20:59:40.271223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84423 ] 00:32:23.488 [2024-12-06 20:59:40.435405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.488 [2024-12-06 20:59:40.555203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.749 [2024-12-06 20:59:40.851238] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:23.749 [2024-12-06 20:59:40.851325] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:24.011 [2024-12-06 20:59:41.013076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.011 [2024-12-06 20:59:41.013336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:24.011 [2024-12-06 20:59:41.013366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:24.011 [2024-12-06 20:59:41.013376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.011 [2024-12-06 20:59:41.013454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.011 [2024-12-06 20:59:41.013469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:24.011 [2024-12-06 20:59:41.013479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:32:24.011 [2024-12-06 20:59:41.013487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.011 [2024-12-06 20:59:41.013510] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:24.011 [2024-12-06 20:59:41.014269] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:24.011 [2024-12-06 20:59:41.014290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.011 [2024-12-06 20:59:41.014298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:24.011 [2024-12-06 20:59:41.014308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:32:24.011 [2024-12-06 20:59:41.014316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.011 [2024-12-06 20:59:41.014608] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:24.011 [2024-12-06 20:59:41.014642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.011 [2024-12-06 20:59:41.014654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:24.011 [2024-12-06 20:59:41.014665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:24.011 [2024-12-06 20:59:41.014673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.011 [2024-12-06 20:59:41.014728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.011 [2024-12-06 20:59:41.014738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:24.011 [2024-12-06 20:59:41.014747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:24.011 [2024-12-06 20:59:41.014754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.011 [2024-12-06 20:59:41.015096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
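Nearly every line in this startup sequence repeats the same four-field pattern emitted per management step from mngt/ftl_mngt.c: Action, name, duration, status. A minimal sketch of how such a step runner could time and report named steps (hypothetical types and names, assumed for illustration only; not SPDK's actual implementation):

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical step descriptor: a name plus a callback returning 0 on success. */
    struct step { const char *name; int (*fn)(void); };

    static int run_step(const struct step *s)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = s->fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        /* The same four fields the log shows for every management step. */
        printf("Action\nname: %s\nduration: %.3f ms\nstatus: %d\n",
               s->name, ms, status);
        return status;
    }

    static int check_configuration(void) { return 0; } /* stand-in step body */

    int main(void)
    {
        struct step s = { "Check configuration", check_configuration };
        return run_step(&s);
    }

Because startup and shutdown are both driven through the same runner, the "FTL startup" and "FTL fast shutdown" phases in this log end with an identical summary line giving the aggregate duration and result.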
00:32:24.011 [2024-12-06 20:59:41.015110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:24.011 [2024-12-06 20:59:41.015119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:32:24.012 [2024-12-06 20:59:41.015127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.012 [2024-12-06 20:59:41.015198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.012 [2024-12-06 20:59:41.015208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:24.012 [2024-12-06 20:59:41.015217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:24.012 [2024-12-06 20:59:41.015225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.012 [2024-12-06 20:59:41.015252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.012 [2024-12-06 20:59:41.015265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:24.012 [2024-12-06 20:59:41.015282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:24.012 [2024-12-06 20:59:41.015293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.012 [2024-12-06 20:59:41.015326] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:24.012 [2024-12-06 20:59:41.019800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.012 [2024-12-06 20:59:41.019847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:24.012 [2024-12-06 20:59:41.019858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.481 ms 00:32:24.012 [2024-12-06 20:59:41.019866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.012 [2024-12-06 20:59:41.019931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.012 [2024-12-06 20:59:41.019942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:24.012 [2024-12-06 20:59:41.019950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:24.012 [2024-12-06 20:59:41.019958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.012 [2024-12-06 20:59:41.020019] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:24.012 [2024-12-06 20:59:41.020055] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:24.012 [2024-12-06 20:59:41.020095] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:24.012 [2024-12-06 20:59:41.020112] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:24.012 [2024-12-06 20:59:41.020220] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:24.012 [2024-12-06 20:59:41.020231] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:24.012 [2024-12-06 20:59:41.020243] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:24.012 [2024-12-06 20:59:41.020254] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:24.012 [2024-12-06 20:59:41.020263] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:24.012 [2024-12-06 20:59:41.020275] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:24.012 [2024-12-06 20:59:41.020283] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:24.012 [2024-12-06 20:59:41.020291] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:24.012 [2024-12-06 20:59:41.020299] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:24.012 [2024-12-06 20:59:41.020307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.012 [2024-12-06 20:59:41.020315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:24.012 [2024-12-06 20:59:41.020323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:32:24.012 [2024-12-06 20:59:41.020331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.012 [2024-12-06 20:59:41.020417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.012 [2024-12-06 20:59:41.020426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:24.012 [2024-12-06 20:59:41.020434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:24.012 [2024-12-06 20:59:41.020445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.012 [2024-12-06 20:59:41.020547] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:24.012 [2024-12-06 20:59:41.020558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:24.012 [2024-12-06 20:59:41.020567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:24.012 [2024-12-06 20:59:41.020576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.012 [2024-12-06 20:59:41.020585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:24.012 [2024-12-06 20:59:41.020592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:24.012 [2024-12-06 20:59:41.020600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:24.012 [2024-12-06 20:59:41.020609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:24.012 [2024-12-06 20:59:41.020616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:24.012 [2024-12-06 20:59:41.020623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:24.012 [2024-12-06 20:59:41.020630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:24.012 [2024-12-06 20:59:41.020637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:24.012 [2024-12-06 20:59:41.020644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:24.012 [2024-12-06 20:59:41.020651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:24.012 [2024-12-06 20:59:41.020658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:24.012 [2024-12-06 20:59:41.020671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.012 [2024-12-06 20:59:41.020678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:24.012 [2024-12-06 20:59:41.020686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:24.012 [2024-12-06 20:59:41.020692] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.012 [2024-12-06 20:59:41.020701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:24.012 [2024-12-06 20:59:41.020708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:24.012 [2024-12-06 20:59:41.020715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.012 [2024-12-06 20:59:41.020723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:24.012 [2024-12-06 20:59:41.020730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:24.012 [2024-12-06 20:59:41.020737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.012 [2024-12-06 20:59:41.020744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:24.012 [2024-12-06 20:59:41.020751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:24.012 [2024-12-06 20:59:41.020757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.012 [2024-12-06 20:59:41.020764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:24.012 [2024-12-06 20:59:41.020771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:24.012 [2024-12-06 20:59:41.020777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.012 [2024-12-06 20:59:41.020784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:24.012 [2024-12-06 20:59:41.020790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:24.012 [2024-12-06 20:59:41.020797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:24.012 [2024-12-06 20:59:41.020804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:24.012 [2024-12-06 20:59:41.020811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:24.012 [2024-12-06 20:59:41.020817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:24.012 [2024-12-06 20:59:41.020824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:24.012 [2024-12-06 20:59:41.020831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:24.012 [2024-12-06 20:59:41.020838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.012 [2024-12-06 20:59:41.020845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:24.012 [2024-12-06 20:59:41.020852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:24.012 [2024-12-06 20:59:41.020859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.012 [2024-12-06 20:59:41.020866] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:24.012 [2024-12-06 20:59:41.020875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:24.012 [2024-12-06 20:59:41.020883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:24.012 [2024-12-06 20:59:41.021200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.012 [2024-12-06 20:59:41.021243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:24.012 [2024-12-06 20:59:41.021264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:24.012 [2024-12-06 20:59:41.021283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:24.012 
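The NV cache layout dumped above is internally consistent with the geometry reported in the "Initialize layout" step: 20971520 L2P entries at an address size of 4 bytes come to 20971520 * 4 = 83886080 bytes, exactly the 80.00 MiB shown for the l2p region. A quick sanity check of that arithmetic (hypothetical helper, not part of SPDK):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical check: the l2p region size should equal the L2P entry
     * count times the per-entry address size reported by ftl_layout. */
    int main(void)
    {
        uint64_t l2p_entries = 20971520; /* "L2P entries" from the log */
        uint64_t addr_size   = 4;        /* "L2P address size" in bytes */
        double mib = (double)(l2p_entries * addr_size) / (1024.0 * 1024.0);
        printf("l2p region: %.2f MiB\n", mib); /* prints 80.00 MiB */
        return 0;
    }

The same bookkeeping accounts for the region offsets: band_md starts at 80.12 MiB, immediately after the 0.12 MiB superblock plus the 80.00 MiB l2p region.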
[2024-12-06 20:59:41.021302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:24.012 [2024-12-06 20:59:41.021321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:24.012 [2024-12-06 20:59:41.021341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:24.012 [2024-12-06 20:59:41.021446] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:24.012 [2024-12-06 20:59:41.021487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:24.012 [2024-12-06 20:59:41.021518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:24.012 [2024-12-06 20:59:41.021547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:24.012 [2024-12-06 20:59:41.021577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:24.012 [2024-12-06 20:59:41.021607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:24.012 [2024-12-06 20:59:41.021684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:24.012 [2024-12-06 20:59:41.021727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:24.013 [2024-12-06 20:59:41.021757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:24.013 [2024-12-06 20:59:41.021787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:24.013 [2024-12-06 20:59:41.021815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:24.013 [2024-12-06 20:59:41.021844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:24.013 [2024-12-06 20:59:41.021953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:24.013 [2024-12-06 20:59:41.021986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:24.013 [2024-12-06 20:59:41.022016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:24.013 [2024-12-06 20:59:41.022093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:24.013 [2024-12-06 20:59:41.022127] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:24.013 [2024-12-06 20:59:41.022158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:24.013 [2024-12-06 20:59:41.022220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:24.013 [2024-12-06 20:59:41.022272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:24.013 [2024-12-06 20:59:41.022303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:24.013 [2024-12-06 20:59:41.022333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:24.013 [2024-12-06 20:59:41.022365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.022385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:24.013 [2024-12-06 20:59:41.022406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.886 ms 00:32:24.013 [2024-12-06 20:59:41.022426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.050832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.050883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:24.013 [2024-12-06 20:59:41.050920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.126 ms 00:32:24.013 [2024-12-06 20:59:41.050928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.051021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.051031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:24.013 [2024-12-06 20:59:41.051044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:24.013 [2024-12-06 20:59:41.051052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.098288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.098346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:24.013 [2024-12-06 20:59:41.098360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.175 ms 00:32:24.013 [2024-12-06 20:59:41.098369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.098423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.098434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:24.013 [2024-12-06 20:59:41.098443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:24.013 [2024-12-06 20:59:41.098452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.098568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.098581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:24.013 [2024-12-06 20:59:41.098590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:32:24.013 [2024-12-06 20:59:41.098598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.098728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.098740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:24.013 [2024-12-06 20:59:41.098749] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:32:24.013 [2024-12-06 20:59:41.098758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.114686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.114931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:24.013 [2024-12-06 20:59:41.114955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.907 ms 00:32:24.013 [2024-12-06 20:59:41.114964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.115140] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:24.013 [2024-12-06 20:59:41.115155] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:24.013 [2024-12-06 20:59:41.115169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.115178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:24.013 [2024-12-06 20:59:41.115186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:24.013 [2024-12-06 20:59:41.115194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.127513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.127564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:24.013 [2024-12-06 20:59:41.127576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.301 ms 00:32:24.013 [2024-12-06 20:59:41.127585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.127715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.127725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:24.013 [2024-12-06 20:59:41.127734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:32:24.013 [2024-12-06 20:59:41.127748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.127800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.127810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:24.013 [2024-12-06 20:59:41.127827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:24.013 [2024-12-06 20:59:41.127834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.128504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.128537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:24.013 [2024-12-06 20:59:41.128547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:32:24.013 [2024-12-06 20:59:41.128555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.013 [2024-12-06 20:59:41.128583] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:24.013 [2024-12-06 20:59:41.128593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.013 [2024-12-06 20:59:41.128601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:24.013 [2024-12-06 20:59:41.128610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:24.013 [2024-12-06 20:59:41.128619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.275 [2024-12-06 20:59:41.141414] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:24.275 [2024-12-06 20:59:41.141604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.275 [2024-12-06 20:59:41.141615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:24.275 [2024-12-06 20:59:41.141627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.963 ms 00:32:24.275 [2024-12-06 20:59:41.141634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.275 [2024-12-06 20:59:41.143846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.275 [2024-12-06 20:59:41.143885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:24.275 [2024-12-06 20:59:41.143906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:32:24.275 [2024-12-06 20:59:41.143914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.275 [2024-12-06 20:59:41.144012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.275 [2024-12-06 20:59:41.144023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:24.275 [2024-12-06 20:59:41.144048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:24.275 [2024-12-06 20:59:41.144057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.275 [2024-12-06 20:59:41.144082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.275 [2024-12-06 20:59:41.144098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:24.275 [2024-12-06 20:59:41.144107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:24.275 [2024-12-06 20:59:41.144115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.275 [2024-12-06 20:59:41.144150] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:24.275 [2024-12-06 20:59:41.144160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.275 [2024-12-06 20:59:41.144168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:24.275 [2024-12-06 20:59:41.144176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:24.275 [2024-12-06 20:59:41.144183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.275 [2024-12-06 20:59:41.171350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.275 [2024-12-06 20:59:41.171401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:24.275 [2024-12-06 20:59:41.171416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.147 ms 00:32:24.275 [2024-12-06 20:59:41.171424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.275 [2024-12-06 20:59:41.171509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.275 [2024-12-06 20:59:41.171520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:24.275 [2024-12-06 20:59:41.171530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.035 ms 00:32:24.275 [2024-12-06 20:59:41.171539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.275 [2024-12-06 20:59:41.172930] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 159.336 ms, result 0 00:32:25.659  [2024-12-06T20:59:43.363Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-06T20:59:44.739Z] Copying: 22/1024 [MB] (11 MBps) [2024-12-06T20:59:45.676Z] Copying: 38/1024 [MB] (16 MBps) [2024-12-06T20:59:46.617Z] Copying: 55/1024 [MB] (17 MBps) [2024-12-06T20:59:47.602Z] Copying: 78/1024 [MB] (22 MBps) [2024-12-06T20:59:48.546Z] Copying: 97/1024 [MB] (18 MBps) [2024-12-06T20:59:49.495Z] Copying: 112/1024 [MB] (15 MBps) [2024-12-06T20:59:50.442Z] Copying: 133/1024 [MB] (20 MBps) [2024-12-06T20:59:51.385Z] Copying: 147/1024 [MB] (14 MBps) [2024-12-06T20:59:52.769Z] Copying: 162/1024 [MB] (14 MBps) [2024-12-06T20:59:53.713Z] Copying: 172/1024 [MB] (10 MBps) [2024-12-06T20:59:54.659Z] Copying: 186/1024 [MB] (13 MBps) [2024-12-06T20:59:55.603Z] Copying: 198/1024 [MB] (12 MBps) [2024-12-06T20:59:56.546Z] Copying: 213/1024 [MB] (14 MBps) [2024-12-06T20:59:57.492Z] Copying: 230/1024 [MB] (17 MBps) [2024-12-06T20:59:58.438Z] Copying: 249/1024 [MB] (19 MBps) [2024-12-06T20:59:59.384Z] Copying: 270/1024 [MB] (20 MBps) [2024-12-06T21:00:00.774Z] Copying: 286/1024 [MB] (15 MBps) [2024-12-06T21:00:01.716Z] Copying: 302/1024 [MB] (16 MBps) [2024-12-06T21:00:02.659Z] Copying: 322/1024 [MB] (19 MBps) [2024-12-06T21:00:03.600Z] Copying: 335/1024 [MB] (12 MBps) [2024-12-06T21:00:04.541Z] Copying: 350/1024 [MB] (15 MBps) [2024-12-06T21:00:05.483Z] Copying: 362/1024 [MB] (11 MBps) [2024-12-06T21:00:06.424Z] Copying: 372/1024 [MB] (10 MBps) [2024-12-06T21:00:07.368Z] Copying: 383/1024 [MB] (10 MBps) [2024-12-06T21:00:08.757Z] Copying: 394/1024 [MB] (10 MBps) [2024-12-06T21:00:09.703Z] Copying: 411/1024 [MB] (16 MBps) [2024-12-06T21:00:10.652Z] Copying: 433/1024 [MB] (22 MBps) [2024-12-06T21:00:11.599Z] Copying: 453/1024 [MB] (19 MBps) [2024-12-06T21:00:12.544Z] Copying: 469/1024 [MB] (16 MBps) [2024-12-06T21:00:13.539Z] Copying: 484/1024 [MB] (14 MBps) [2024-12-06T21:00:14.482Z] Copying: 495/1024 [MB] (11 MBps) [2024-12-06T21:00:15.426Z] Copying: 518/1024 [MB] (22 MBps) [2024-12-06T21:00:16.366Z] Copying: 536/1024 [MB] (18 MBps) [2024-12-06T21:00:17.751Z] Copying: 552/1024 [MB] (15 MBps) [2024-12-06T21:00:18.695Z] Copying: 568/1024 [MB] (16 MBps) [2024-12-06T21:00:19.638Z] Copying: 585/1024 [MB] (16 MBps) [2024-12-06T21:00:20.581Z] Copying: 602/1024 [MB] (17 MBps) [2024-12-06T21:00:21.524Z] Copying: 616/1024 [MB] (14 MBps) [2024-12-06T21:00:22.465Z] Copying: 634/1024 [MB] (17 MBps) [2024-12-06T21:00:23.410Z] Copying: 647/1024 [MB] (13 MBps) [2024-12-06T21:00:24.794Z] Copying: 670/1024 [MB] (22 MBps) [2024-12-06T21:00:25.389Z] Copying: 683/1024 [MB] (12 MBps) [2024-12-06T21:00:26.774Z] Copying: 697/1024 [MB] (14 MBps) [2024-12-06T21:00:27.716Z] Copying: 710/1024 [MB] (13 MBps) [2024-12-06T21:00:28.660Z] Copying: 722/1024 [MB] (12 MBps) [2024-12-06T21:00:29.603Z] Copying: 738/1024 [MB] (15 MBps) [2024-12-06T21:00:30.547Z] Copying: 754/1024 [MB] (16 MBps) [2024-12-06T21:00:31.489Z] Copying: 768/1024 [MB] (14 MBps) [2024-12-06T21:00:32.428Z] Copying: 787/1024 [MB] (18 MBps) [2024-12-06T21:00:33.374Z] Copying: 802/1024 [MB] (14 MBps) [2024-12-06T21:00:34.760Z] Copying: 818/1024 [MB] (16 MBps) [2024-12-06T21:00:35.697Z] Copying: 835/1024 [MB] (16 MBps) [2024-12-06T21:00:36.634Z] Copying: 850/1024 [MB] 
(15 MBps) [2024-12-06T21:00:37.580Z] Copying: 865/1024 [MB] (15 MBps) [2024-12-06T21:00:38.524Z] Copying: 888/1024 [MB] (22 MBps) [2024-12-06T21:00:39.467Z] Copying: 903/1024 [MB] (15 MBps) [2024-12-06T21:00:40.411Z] Copying: 921/1024 [MB] (17 MBps) [2024-12-06T21:00:41.794Z] Copying: 940/1024 [MB] (18 MBps) [2024-12-06T21:00:42.446Z] Copying: 959/1024 [MB] (19 MBps) [2024-12-06T21:00:43.388Z] Copying: 982/1024 [MB] (22 MBps) [2024-12-06T21:00:44.778Z] Copying: 1004/1024 [MB] (22 MBps) [2024-12-06T21:00:44.778Z] Copying: 1023/1024 [MB] (18 MBps) [2024-12-06T21:00:45.041Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-06 21:00:44.906632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.908 [2024-12-06 21:00:44.907182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:27.908 [2024-12-06 21:00:44.907395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:27.908 [2024-12-06 21:00:44.907469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.908 [2024-12-06 21:00:44.907593] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:27.908 [2024-12-06 21:00:44.912447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.908 [2024-12-06 21:00:44.912648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:27.908 [2024-12-06 21:00:44.912980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.744 ms 00:33:27.908 [2024-12-06 21:00:44.913007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.908 [2024-12-06 21:00:44.913328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.908 [2024-12-06 21:00:44.913343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:27.908 [2024-12-06 21:00:44.913354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:33:27.908 [2024-12-06 21:00:44.913365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.908 [2024-12-06 21:00:44.913404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.908 [2024-12-06 21:00:44.913417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:27.908 [2024-12-06 21:00:44.913429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:27.908 [2024-12-06 21:00:44.913439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.908 [2024-12-06 21:00:44.913504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.908 [2024-12-06 21:00:44.913517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:27.908 [2024-12-06 21:00:44.913527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:33:27.908 [2024-12-06 21:00:44.913538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.908 [2024-12-06 21:00:44.913556] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:27.908 [2024-12-06 21:00:44.913573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 
state: free 00:33:27.908 [2024-12-06 21:00:44.913608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:27.908 [2024-12-06 21:00:44.913737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 
261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.913998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914359] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:27.909 [2024-12-06 21:00:44.914589] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:27.909 [2024-12-06 21:00:44.914599] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 57031464-d590-49dc-928a-15f887881385 00:33:27.909 [2024-12-06 21:00:44.914610] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:27.909 [2024-12-06 21:00:44.914619] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:27.909 [2024-12-06 21:00:44.914629] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:27.909 [2024-12-06 21:00:44.914639] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:27.909 [2024-12-06 21:00:44.914648] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:27.909 [2024-12-06 21:00:44.914659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:27.910 [2024-12-06 21:00:44.914670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:27.910 [2024-12-06 21:00:44.914679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:27.910 [2024-12-06 21:00:44.914688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:27.910 [2024-12-06 21:00:44.914697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.910 [2024-12-06 21:00:44.914707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:27.910 [2024-12-06 21:00:44.914718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:33:27.910 [2024-12-06 21:00:44.914731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.910 [2024-12-06 21:00:44.929739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.910 [2024-12-06 21:00:44.929932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:27.910 [2024-12-06 21:00:44.929992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.989 ms 00:33:27.910 [2024-12-06 21:00:44.930017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.910 [2024-12-06 21:00:44.930416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.910 [2024-12-06 21:00:44.930448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:27.910 [2024-12-06 21:00:44.930519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:33:27.910 [2024-12-06 21:00:44.930582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.910 [2024-12-06 21:00:44.967362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:27.910 [2024-12-06 21:00:44.967528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:27.910 [2024-12-06 21:00:44.967589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:27.910 [2024-12-06 21:00:44.967615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.910 [2024-12-06 21:00:44.967707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:27.910 [2024-12-06 21:00:44.967734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:27.910 [2024-12-06 21:00:44.967764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:27.910 [2024-12-06 21:00:44.967784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.910 [2024-12-06 21:00:44.967856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:27.910 [2024-12-06 21:00:44.967953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:27.910 [2024-12-06 21:00:44.967975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:33:27.910 [2024-12-06 21:00:44.967996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.910 [2024-12-06 21:00:44.968047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:27.910 [2024-12-06 21:00:44.968071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:27.910 [2024-12-06 21:00:44.968092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:27.910 [2024-12-06 21:00:44.968162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.171 [2024-12-06 21:00:45.054682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.171 [2024-12-06 21:00:45.054919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:28.171 [2024-12-06 21:00:45.054984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.171 [2024-12-06 21:00:45.055007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.171 [2024-12-06 21:00:45.125335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.171 [2024-12-06 21:00:45.125535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:28.171 [2024-12-06 21:00:45.125595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.171 [2024-12-06 21:00:45.125629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.171 [2024-12-06 21:00:45.125729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.171 [2024-12-06 21:00:45.125754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:28.171 [2024-12-06 21:00:45.125775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.171 [2024-12-06 21:00:45.125794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.171 [2024-12-06 21:00:45.125849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.171 [2024-12-06 21:00:45.125872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:28.171 [2024-12-06 21:00:45.126017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.171 [2024-12-06 21:00:45.126043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.171 [2024-12-06 21:00:45.126163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.171 [2024-12-06 21:00:45.126188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:28.171 [2024-12-06 21:00:45.126246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.171 [2024-12-06 21:00:45.126257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.171 [2024-12-06 21:00:45.126292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.171 [2024-12-06 21:00:45.126302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:28.171 [2024-12-06 21:00:45.126312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.171 [2024-12-06 21:00:45.126320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.171 [2024-12-06 21:00:45.126367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.171 [2024-12-06 21:00:45.126377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:28.171 [2024-12-06 
21:00:45.126385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.171 [2024-12-06 21:00:45.126393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.171 [2024-12-06 21:00:45.126440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.171 [2024-12-06 21:00:45.126451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:28.171 [2024-12-06 21:00:45.126459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.171 [2024-12-06 21:00:45.126467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.171 [2024-12-06 21:00:45.126607] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 219.969 ms, result 0 00:33:29.109 00:33:29.109 00:33:29.109 21:00:45 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:31.025 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:31.025 21:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:31.025 [2024-12-06 21:00:48.142945] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:33:31.025 [2024-12-06 21:00:48.143201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85094 ] 00:33:31.282 [2024-12-06 21:00:48.300210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.282 [2024-12-06 21:00:48.397179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.539 [2024-12-06 21:00:48.654844] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:31.539 [2024-12-06 21:00:48.655060] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:31.797 [2024-12-06 21:00:48.812051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.797 [2024-12-06 21:00:48.812209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:31.797 [2024-12-06 21:00:48.812281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:31.797 [2024-12-06 21:00:48.812305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.797 [2024-12-06 21:00:48.812373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.797 [2024-12-06 21:00:48.812401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:31.797 [2024-12-06 21:00:48.812421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:31.797 [2024-12-06 21:00:48.812499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.797 [2024-12-06 21:00:48.812540] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:31.797 [2024-12-06 21:00:48.813318] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:31.797 [2024-12-06 21:00:48.813412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.797 [2024-12-06 21:00:48.813795] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:31.797 [2024-12-06 21:00:48.813879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:33:31.797 [2024-12-06 21:00:48.813925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.797 [2024-12-06 21:00:48.814315] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:31.797 [2024-12-06 21:00:48.814420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.797 [2024-12-06 21:00:48.814451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:31.797 [2024-12-06 21:00:48.814492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:33:31.797 [2024-12-06 21:00:48.814513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.797 [2024-12-06 21:00:48.814568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.797 [2024-12-06 21:00:48.814591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:31.797 [2024-12-06 21:00:48.814610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:33:31.797 [2024-12-06 21:00:48.814658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.797 [2024-12-06 21:00:48.814939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.797 [2024-12-06 21:00:48.814951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:31.797 [2024-12-06 21:00:48.814960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:33:31.797 [2024-12-06 21:00:48.814967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.797 [2024-12-06 21:00:48.815028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.797 [2024-12-06 21:00:48.815037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:31.797 [2024-12-06 21:00:48.815045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:33:31.797 [2024-12-06 21:00:48.815052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.797 [2024-12-06 21:00:48.815074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.797 [2024-12-06 21:00:48.815081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:31.797 [2024-12-06 21:00:48.815092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:31.797 [2024-12-06 21:00:48.815099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.797 [2024-12-06 21:00:48.815116] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:31.797 [2024-12-06 21:00:48.818632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.797 [2024-12-06 21:00:48.818659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:31.797 [2024-12-06 21:00:48.818667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.520 ms 00:33:31.797 [2024-12-06 21:00:48.818674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.797 [2024-12-06 21:00:48.818711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.797 [2024-12-06 21:00:48.818720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:31.797 [2024-12-06 21:00:48.818727] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:31.797 [2024-12-06 21:00:48.818738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.797 [2024-12-06 21:00:48.818786] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:31.797 [2024-12-06 21:00:48.818807] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:31.797 [2024-12-06 21:00:48.818843] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:31.797 [2024-12-06 21:00:48.818857] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:31.797 [2024-12-06 21:00:48.818973] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:31.797 [2024-12-06 21:00:48.818985] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:31.797 [2024-12-06 21:00:48.818995] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:31.797 [2024-12-06 21:00:48.819005] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:31.797 [2024-12-06 21:00:48.819015] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:31.797 [2024-12-06 21:00:48.819025] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:31.797 [2024-12-06 21:00:48.819032] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:31.797 [2024-12-06 21:00:48.819039] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:31.797 [2024-12-06 21:00:48.819046] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:31.797 [2024-12-06 21:00:48.819053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.797 [2024-12-06 21:00:48.819060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:31.797 [2024-12-06 21:00:48.819067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:33:31.797 [2024-12-06 21:00:48.819078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.797 [2024-12-06 21:00:48.819160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.797 [2024-12-06 21:00:48.819168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:31.797 [2024-12-06 21:00:48.819175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:33:31.797 [2024-12-06 21:00:48.819184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.797 [2024-12-06 21:00:48.819284] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:31.797 [2024-12-06 21:00:48.819299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:31.797 [2024-12-06 21:00:48.819307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:31.797 [2024-12-06 21:00:48.819315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:31.797 [2024-12-06 21:00:48.819322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:31.797 [2024-12-06 21:00:48.819329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:31.797 
[2024-12-06 21:00:48.819336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:31.797 [2024-12-06 21:00:48.819343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:31.797 [2024-12-06 21:00:48.819349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:31.797 [2024-12-06 21:00:48.819356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:31.797 [2024-12-06 21:00:48.819363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:31.797 [2024-12-06 21:00:48.819372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:31.797 [2024-12-06 21:00:48.819378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:31.797 [2024-12-06 21:00:48.819385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:31.797 [2024-12-06 21:00:48.819391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:31.797 [2024-12-06 21:00:48.819402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:31.797 [2024-12-06 21:00:48.819409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:31.797 [2024-12-06 21:00:48.819416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:31.797 [2024-12-06 21:00:48.819422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:31.797 [2024-12-06 21:00:48.819428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:31.797 [2024-12-06 21:00:48.819435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:31.797 [2024-12-06 21:00:48.819442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:31.797 [2024-12-06 21:00:48.819448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:31.797 [2024-12-06 21:00:48.819454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:31.797 [2024-12-06 21:00:48.819460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:31.797 [2024-12-06 21:00:48.819467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:31.797 [2024-12-06 21:00:48.819473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:31.797 [2024-12-06 21:00:48.819479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:31.797 [2024-12-06 21:00:48.819486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:31.797 [2024-12-06 21:00:48.819492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:31.797 [2024-12-06 21:00:48.819498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:31.798 [2024-12-06 21:00:48.819505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:31.798 [2024-12-06 21:00:48.819512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:31.798 [2024-12-06 21:00:48.819518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:31.798 [2024-12-06 21:00:48.819524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:31.798 [2024-12-06 21:00:48.819531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:31.798 [2024-12-06 21:00:48.819537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:31.798 [2024-12-06 21:00:48.819544] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_log 00:33:31.798 [2024-12-06 21:00:48.819551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:31.798 [2024-12-06 21:00:48.819558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:31.798 [2024-12-06 21:00:48.819564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:31.798 [2024-12-06 21:00:48.819570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:31.798 [2024-12-06 21:00:48.819577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:31.798 [2024-12-06 21:00:48.819584] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:31.798 [2024-12-06 21:00:48.819591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:31.798 [2024-12-06 21:00:48.819598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:31.798 [2024-12-06 21:00:48.819605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:31.798 [2024-12-06 21:00:48.819614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:31.798 [2024-12-06 21:00:48.819621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:31.798 [2024-12-06 21:00:48.819627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:31.798 [2024-12-06 21:00:48.819634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:31.798 [2024-12-06 21:00:48.819640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:31.798 [2024-12-06 21:00:48.819646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:31.798 [2024-12-06 21:00:48.819654] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:31.798 [2024-12-06 21:00:48.819663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:31.798 [2024-12-06 21:00:48.819671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:31.798 [2024-12-06 21:00:48.819678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:31.798 [2024-12-06 21:00:48.819685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:31.798 [2024-12-06 21:00:48.819692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:31.798 [2024-12-06 21:00:48.819698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:31.798 [2024-12-06 21:00:48.819705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:31.798 [2024-12-06 21:00:48.819712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:31.798 [2024-12-06 21:00:48.819719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:31.798 [2024-12-06 21:00:48.819726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:31.798 [2024-12-06 21:00:48.819733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:31.798 [2024-12-06 21:00:48.819739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:31.798 [2024-12-06 21:00:48.819747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:31.798 [2024-12-06 21:00:48.819754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:31.798 [2024-12-06 21:00:48.819761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:31.798 [2024-12-06 21:00:48.819768] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:31.798 [2024-12-06 21:00:48.819775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:31.798 [2024-12-06 21:00:48.819783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:31.798 [2024-12-06 21:00:48.819791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:31.798 [2024-12-06 21:00:48.819798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:31.798 [2024-12-06 21:00:48.819805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:31.798 [2024-12-06 21:00:48.819813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.819820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:31.798 [2024-12-06 21:00:48.819827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:33:31.798 [2024-12-06 21:00:48.819834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.843344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.843375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:31.798 [2024-12-06 21:00:48.843385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.472 ms 00:33:31.798 [2024-12-06 21:00:48.843392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.843468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.843476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:31.798 [2024-12-06 21:00:48.843487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:31.798 [2024-12-06 21:00:48.843495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.888560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.888598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:33:31.798 [2024-12-06 21:00:48.888610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.020 ms 00:33:31.798 [2024-12-06 21:00:48.888618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.888659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.888669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:31.798 [2024-12-06 21:00:48.888677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:31.798 [2024-12-06 21:00:48.888684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.888774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.888784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:31.798 [2024-12-06 21:00:48.888792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:33:31.798 [2024-12-06 21:00:48.888800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.888929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.888940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:31.798 [2024-12-06 21:00:48.888948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:33:31.798 [2024-12-06 21:00:48.888955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.901993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.902022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:31.798 [2024-12-06 21:00:48.902032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.019 ms 00:33:31.798 [2024-12-06 21:00:48.902039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.902141] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:31.798 [2024-12-06 21:00:48.902154] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:31.798 [2024-12-06 21:00:48.902163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.902172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:31.798 [2024-12-06 21:00:48.902180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:33:31.798 [2024-12-06 21:00:48.902186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.914419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.914446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:31.798 [2024-12-06 21:00:48.914457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.219 ms 00:33:31.798 [2024-12-06 21:00:48.914464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.914571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.914580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:31.798 [2024-12-06 21:00:48.914587] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:33:31.798 [2024-12-06 21:00:48.914598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.914655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.914664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:31.798 [2024-12-06 21:00:48.914678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:31.798 [2024-12-06 21:00:48.914685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.915270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.915282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:31.798 [2024-12-06 21:00:48.915290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:33:31.798 [2024-12-06 21:00:48.915297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.915315] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:31.798 [2024-12-06 21:00:48.915324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.915331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:31.798 [2024-12-06 21:00:48.915339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:31.798 [2024-12-06 21:00:48.915346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.798 [2024-12-06 21:00:48.926254] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:31.798 [2024-12-06 21:00:48.926488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.798 [2024-12-06 21:00:48.926502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:31.798 [2024-12-06 21:00:48.926512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.125 ms 00:33:31.798 [2024-12-06 21:00:48.926520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.055 [2024-12-06 21:00:48.928545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.055 [2024-12-06 21:00:48.928570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:32.055 [2024-12-06 21:00:48.928579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.006 ms 00:33:32.055 [2024-12-06 21:00:48.928586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.055 [2024-12-06 21:00:48.928661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.055 [2024-12-06 21:00:48.928671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:32.055 [2024-12-06 21:00:48.928679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:32.055 [2024-12-06 21:00:48.928686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.056 [2024-12-06 21:00:48.928706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.056 [2024-12-06 21:00:48.928718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:32.056 [2024-12-06 21:00:48.928726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:32.056 [2024-12-06 21:00:48.928733] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.056 [2024-12-06 21:00:48.928757] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:32.056 [2024-12-06 21:00:48.928767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.056 [2024-12-06 21:00:48.928774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:32.056 [2024-12-06 21:00:48.928781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:32.056 [2024-12-06 21:00:48.928787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.056 [2024-12-06 21:00:48.952980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.056 [2024-12-06 21:00:48.953102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:32.056 [2024-12-06 21:00:48.953118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.174 ms 00:33:32.056 [2024-12-06 21:00:48.953125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.056 [2024-12-06 21:00:48.953185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.056 [2024-12-06 21:00:48.953194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:32.056 [2024-12-06 21:00:48.953201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:33:32.056 [2024-12-06 21:00:48.953208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.056 [2024-12-06 21:00:48.954065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 141.611 ms, result 0 00:33:32.988  [2024-12-06T21:00:51.053Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-06T21:00:51.987Z] Copying: 62/1024 [MB] (42 MBps) [2024-12-06T21:00:53.365Z] Copying: 81/1024 [MB] (19 MBps) [2024-12-06T21:00:54.300Z] Copying: 132/1024 [MB] (50 MBps) [2024-12-06T21:00:55.238Z] Copying: 176/1024 [MB] (44 MBps) [2024-12-06T21:00:56.169Z] Copying: 198/1024 [MB] (21 MBps) [2024-12-06T21:00:57.101Z] Copying: 229/1024 [MB] (31 MBps) [2024-12-06T21:00:58.056Z] Copying: 269/1024 [MB] (40 MBps) [2024-12-06T21:00:58.990Z] Copying: 308/1024 [MB] (38 MBps) [2024-12-06T21:01:00.366Z] Copying: 337/1024 [MB] (29 MBps) [2024-12-06T21:01:01.306Z] Copying: 353/1024 [MB] (16 MBps) [2024-12-06T21:01:02.242Z] Copying: 380/1024 [MB] (26 MBps) [2024-12-06T21:01:03.177Z] Copying: 431/1024 [MB] (51 MBps) [2024-12-06T21:01:04.114Z] Copying: 465/1024 [MB] (34 MBps) [2024-12-06T21:01:05.057Z] Copying: 493/1024 [MB] (27 MBps) [2024-12-06T21:01:05.996Z] Copying: 517/1024 [MB] (24 MBps) [2024-12-06T21:01:07.368Z] Copying: 541/1024 [MB] (23 MBps) [2024-12-06T21:01:08.303Z] Copying: 570/1024 [MB] (28 MBps) [2024-12-06T21:01:09.236Z] Copying: 594/1024 [MB] (23 MBps) [2024-12-06T21:01:10.166Z] Copying: 616/1024 [MB] (22 MBps) [2024-12-06T21:01:11.096Z] Copying: 657/1024 [MB] (40 MBps) [2024-12-06T21:01:12.024Z] Copying: 681/1024 [MB] (24 MBps) [2024-12-06T21:01:13.394Z] Copying: 706/1024 [MB] (25 MBps) [2024-12-06T21:01:14.332Z] Copying: 743/1024 [MB] (36 MBps) [2024-12-06T21:01:15.319Z] Copying: 765/1024 [MB] (21 MBps) [2024-12-06T21:01:16.251Z] Copying: 785/1024 [MB] (20 MBps) [2024-12-06T21:01:17.182Z] Copying: 805/1024 [MB] (19 MBps) [2024-12-06T21:01:18.119Z] Copying: 827/1024 [MB] (22 MBps) [2024-12-06T21:01:19.056Z] Copying: 850/1024 [MB] (22 MBps) [2024-12-06T21:01:20.010Z] Copying: 866/1024 [MB] (16 MBps) 
[2024-12-06T21:01:21.386Z] Copying: 888/1024 [MB] (22 MBps) [2024-12-06T21:01:22.324Z] Copying: 908/1024 [MB] (19 MBps) [2024-12-06T21:01:23.264Z] Copying: 936/1024 [MB] (27 MBps) [2024-12-06T21:01:24.206Z] Copying: 989/1024 [MB] (53 MBps) [2024-12-06T21:01:24.466Z] Copying: 1023/1024 [MB] (33 MBps) [2024-12-06T21:01:24.466Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-12-06 21:01:24.429650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.334 [2024-12-06 21:01:24.429696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:07.334 [2024-12-06 21:01:24.429708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:07.334 [2024-12-06 21:01:24.429715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.334 [2024-12-06 21:01:24.431945] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:07.334 [2024-12-06 21:01:24.435051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.334 [2024-12-06 21:01:24.435159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:07.334 [2024-12-06 21:01:24.435174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.076 ms 00:34:07.334 [2024-12-06 21:01:24.435181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.334 [2024-12-06 21:01:24.442504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.334 [2024-12-06 21:01:24.442598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:07.334 [2024-12-06 21:01:24.442611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.630 ms 00:34:07.334 [2024-12-06 21:01:24.442617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.334 [2024-12-06 21:01:24.442641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.334 [2024-12-06 21:01:24.442648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:07.334 [2024-12-06 21:01:24.442655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:07.334 [2024-12-06 21:01:24.442661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.334 [2024-12-06 21:01:24.442700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.334 [2024-12-06 21:01:24.442709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:07.334 [2024-12-06 21:01:24.442715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:07.334 [2024-12-06 21:01:24.442721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.334 [2024-12-06 21:01:24.442732] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:07.334 [2024-12-06 21:01:24.442741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129280 / 261120 wr_cnt: 1 state: open 00:34:07.334 [2024-12-06 21:01:24.442749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442930] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.442996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 
21:01:24.443086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:07.334 [2024-12-06 21:01:24.443092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 
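
Note: each ftl_dev_dump_bands dump like the one in progress here closes with an ftl_dev_dump_stats block reporting total writes, user writes and WAF. Assuming WAF is simply total writes divided by user writes — an assumption, but one that agrees with both dumps captured in this log — a minimal check in Python (waf is a hypothetical helper, not an SPDK function):

    def waf(total_writes: int, user_writes: int) -> float:
        # Write amplification factor: physical writes per user write.
        return total_writes / user_writes

    print(f"{waf(129312, 129280):.4f}")  # 1.0002, as in the stats dump just below
    print(f"{waf(1824, 1792):.4f}")      # 1.0179, as in the post-copy shutdown dump later on
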
00:34:07.335 [2024-12-06 21:01:24.443235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:07.335 [2024-12-06 21:01:24.443364] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:07.335 [2024-12-06 21:01:24.443370] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 57031464-d590-49dc-928a-15f887881385 00:34:07.335 [2024-12-06 21:01:24.443376] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129280 00:34:07.335 [2024-12-06 21:01:24.443382] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129312 00:34:07.335 [2024-12-06 21:01:24.443387] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129280 00:34:07.335 [2024-12-06 21:01:24.443393] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:34:07.335 [2024-12-06 21:01:24.443400] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:07.335 [2024-12-06 21:01:24.443407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:07.335 [2024-12-06 21:01:24.443413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:07.335 [2024-12-06 21:01:24.443418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:07.335 [2024-12-06 21:01:24.443423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:07.335 [2024-12-06 21:01:24.443428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.335 [2024-12-06 21:01:24.443434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:07.335 [2024-12-06 21:01:24.443440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:34:07.335 [2024-12-06 21:01:24.443446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.335 [2024-12-06 21:01:24.453174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.335 [2024-12-06 21:01:24.453264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:07.335 [2024-12-06 21:01:24.453279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.717 ms 00:34:07.335 [2024-12-06 21:01:24.453285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.335 [2024-12-06 21:01:24.453548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.335 [2024-12-06 21:01:24.453555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:07.335 [2024-12-06 21:01:24.453561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:34:07.335 [2024-12-06 21:01:24.453567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.594 [2024-12-06 21:01:24.479137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.594 [2024-12-06 21:01:24.479165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:07.594 [2024-12-06 21:01:24.479172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.594 [2024-12-06 21:01:24.479179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.594 [2024-12-06 21:01:24.479219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.594 [2024-12-06 21:01:24.479226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:07.594 [2024-12-06 21:01:24.479231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.594 [2024-12-06 21:01:24.479237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.594 [2024-12-06 21:01:24.479272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.594 [2024-12-06 21:01:24.479279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:07.594 [2024-12-06 21:01:24.479288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.594 [2024-12-06 21:01:24.479293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.594 [2024-12-06 21:01:24.479305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:34:07.594 [2024-12-06 21:01:24.479310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:07.595 [2024-12-06 21:01:24.479316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.595 [2024-12-06 21:01:24.479322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.595 [2024-12-06 21:01:24.537258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.595 [2024-12-06 21:01:24.537291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:07.595 [2024-12-06 21:01:24.537300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.595 [2024-12-06 21:01:24.537306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.595 [2024-12-06 21:01:24.585427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.595 [2024-12-06 21:01:24.585460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:07.595 [2024-12-06 21:01:24.585468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.595 [2024-12-06 21:01:24.585474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.595 [2024-12-06 21:01:24.585524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.595 [2024-12-06 21:01:24.585532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:07.595 [2024-12-06 21:01:24.585538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.595 [2024-12-06 21:01:24.585547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.595 [2024-12-06 21:01:24.585573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.595 [2024-12-06 21:01:24.585579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:07.595 [2024-12-06 21:01:24.585585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.595 [2024-12-06 21:01:24.585591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.595 [2024-12-06 21:01:24.585646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.595 [2024-12-06 21:01:24.585653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:07.595 [2024-12-06 21:01:24.585660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.595 [2024-12-06 21:01:24.585665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.595 [2024-12-06 21:01:24.585685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.595 [2024-12-06 21:01:24.585692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:07.595 [2024-12-06 21:01:24.585699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.595 [2024-12-06 21:01:24.585704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.595 [2024-12-06 21:01:24.585730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.595 [2024-12-06 21:01:24.585736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:07.595 [2024-12-06 21:01:24.585742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.595 [2024-12-06 21:01:24.585748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.595 [2024-12-06 
21:01:24.585781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.595 [2024-12-06 21:01:24.585788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:07.595 [2024-12-06 21:01:24.585794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.595 [2024-12-06 21:01:24.585800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.595 [2024-12-06 21:01:24.585886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 157.994 ms, result 0 00:34:09.520 00:34:09.520 00:34:09.520 21:01:26 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:09.520 [2024-12-06 21:01:26.299218] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 00:34:09.520 [2024-12-06 21:01:26.299441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85474 ] 00:34:09.520 [2024-12-06 21:01:26.443299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.520 [2024-12-06 21:01:26.519997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.779 [2024-12-06 21:01:26.729624] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:09.779 [2024-12-06 21:01:26.729672] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:09.779 [2024-12-06 21:01:26.876693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.779 [2024-12-06 21:01:26.876730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:09.779 [2024-12-06 21:01:26.876740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:09.779 [2024-12-06 21:01:26.876746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.779 [2024-12-06 21:01:26.876779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.779 [2024-12-06 21:01:26.876788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:09.779 [2024-12-06 21:01:26.876795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:34:09.779 [2024-12-06 21:01:26.876800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.779 [2024-12-06 21:01:26.876813] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:09.779 [2024-12-06 21:01:26.877604] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:09.779 [2024-12-06 21:01:26.877632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.779 [2024-12-06 21:01:26.877640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:09.779 [2024-12-06 21:01:26.877648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:34:09.779 [2024-12-06 21:01:26.877654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.779 [2024-12-06 21:01:26.877886] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, 
shm_clean 1 00:34:09.779 [2024-12-06 21:01:26.877915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.779 [2024-12-06 21:01:26.877923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:09.779 [2024-12-06 21:01:26.877930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:34:09.779 [2024-12-06 21:01:26.877936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.779 [2024-12-06 21:01:26.877968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.779 [2024-12-06 21:01:26.877974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:09.779 [2024-12-06 21:01:26.877980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:34:09.779 [2024-12-06 21:01:26.877986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.779 [2024-12-06 21:01:26.878183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.779 [2024-12-06 21:01:26.878191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:09.779 [2024-12-06 21:01:26.878198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:34:09.779 [2024-12-06 21:01:26.878203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.779 [2024-12-06 21:01:26.878251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.779 [2024-12-06 21:01:26.878257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:09.779 [2024-12-06 21:01:26.878263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:34:09.779 [2024-12-06 21:01:26.878268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.779 [2024-12-06 21:01:26.878284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.779 [2024-12-06 21:01:26.878290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:09.779 [2024-12-06 21:01:26.878298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:09.779 [2024-12-06 21:01:26.878303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.779 [2024-12-06 21:01:26.878317] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:09.779 [2024-12-06 21:01:26.881147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.780 [2024-12-06 21:01:26.881171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:09.780 [2024-12-06 21:01:26.881178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.833 ms 00:34:09.780 [2024-12-06 21:01:26.881183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.780 [2024-12-06 21:01:26.881212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.780 [2024-12-06 21:01:26.881218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:09.780 [2024-12-06 21:01:26.881225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:09.780 [2024-12-06 21:01:26.881230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.780 [2024-12-06 21:01:26.881264] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:09.780 [2024-12-06 21:01:26.881279] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:09.780 [2024-12-06 21:01:26.881306] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:09.780 [2024-12-06 21:01:26.881317] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:09.780 [2024-12-06 21:01:26.881395] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:09.780 [2024-12-06 21:01:26.881403] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:09.780 [2024-12-06 21:01:26.881410] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:09.780 [2024-12-06 21:01:26.881418] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:09.780 [2024-12-06 21:01:26.881424] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:09.780 [2024-12-06 21:01:26.881432] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:09.780 [2024-12-06 21:01:26.881437] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:09.780 [2024-12-06 21:01:26.881442] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:09.780 [2024-12-06 21:01:26.881448] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:09.780 [2024-12-06 21:01:26.881454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.780 [2024-12-06 21:01:26.881459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:09.780 [2024-12-06 21:01:26.881465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:34:09.780 [2024-12-06 21:01:26.881470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.780 [2024-12-06 21:01:26.881532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.780 [2024-12-06 21:01:26.881538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:09.780 [2024-12-06 21:01:26.881544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:34:09.780 [2024-12-06 21:01:26.881550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.780 [2024-12-06 21:01:26.881624] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:09.780 [2024-12-06 21:01:26.881632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:09.780 [2024-12-06 21:01:26.881638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:09.780 [2024-12-06 21:01:26.881644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:09.780 [2024-12-06 21:01:26.881654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:09.780 [2024-12-06 21:01:26.881665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:09.780 [2024-12-06 21:01:26.881670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881675] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:09.780 [2024-12-06 21:01:26.881680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:09.780 [2024-12-06 21:01:26.881686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:09.780 [2024-12-06 21:01:26.881691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:09.780 [2024-12-06 21:01:26.881696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:09.780 [2024-12-06 21:01:26.881701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:09.780 [2024-12-06 21:01:26.881710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:09.780 [2024-12-06 21:01:26.881720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:09.780 [2024-12-06 21:01:26.881725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:09.780 [2024-12-06 21:01:26.881735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:09.780 [2024-12-06 21:01:26.881745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:09.780 [2024-12-06 21:01:26.881750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:09.780 [2024-12-06 21:01:26.881759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:09.780 [2024-12-06 21:01:26.881764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:09.780 [2024-12-06 21:01:26.881773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:09.780 [2024-12-06 21:01:26.881778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:09.780 [2024-12-06 21:01:26.881789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:09.780 [2024-12-06 21:01:26.881793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:09.780 [2024-12-06 21:01:26.881803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:09.780 [2024-12-06 21:01:26.881808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:09.780 [2024-12-06 21:01:26.881812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:09.780 [2024-12-06 21:01:26.881817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:09.780 [2024-12-06 21:01:26.881822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:09.780 [2024-12-06 21:01:26.881827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:09.780 [2024-12-06 
21:01:26.881836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:09.780 [2024-12-06 21:01:26.881842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881847] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:09.780 [2024-12-06 21:01:26.881853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:09.780 [2024-12-06 21:01:26.881859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:09.780 [2024-12-06 21:01:26.881864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:09.780 [2024-12-06 21:01:26.881871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:09.780 [2024-12-06 21:01:26.881876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:09.780 [2024-12-06 21:01:26.881881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:09.780 [2024-12-06 21:01:26.881902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:09.780 [2024-12-06 21:01:26.881907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:09.780 [2024-12-06 21:01:26.881912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:09.780 [2024-12-06 21:01:26.881919] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:09.780 [2024-12-06 21:01:26.881925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:09.780 [2024-12-06 21:01:26.881931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:09.780 [2024-12-06 21:01:26.881937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:09.780 [2024-12-06 21:01:26.881942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:09.780 [2024-12-06 21:01:26.881948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:09.780 [2024-12-06 21:01:26.881953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:09.780 [2024-12-06 21:01:26.881958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:09.780 [2024-12-06 21:01:26.881963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:09.780 [2024-12-06 21:01:26.881969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:09.781 [2024-12-06 21:01:26.881974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:09.781 [2024-12-06 21:01:26.881979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:09.781 [2024-12-06 21:01:26.881985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 
blk_sz:0x20 00:34:09.781 [2024-12-06 21:01:26.881990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:09.781 [2024-12-06 21:01:26.881995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:09.781 [2024-12-06 21:01:26.882000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:09.781 [2024-12-06 21:01:26.882006] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:09.781 [2024-12-06 21:01:26.882012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:09.781 [2024-12-06 21:01:26.882019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:09.781 [2024-12-06 21:01:26.882024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:09.781 [2024-12-06 21:01:26.882030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:09.781 [2024-12-06 21:01:26.882035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:09.781 [2024-12-06 21:01:26.882042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.781 [2024-12-06 21:01:26.882048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:09.781 [2024-12-06 21:01:26.882054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:34:09.781 [2024-12-06 21:01:26.882059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.781 [2024-12-06 21:01:26.900599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.781 [2024-12-06 21:01:26.900688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:09.781 [2024-12-06 21:01:26.900727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.507 ms 00:34:09.781 [2024-12-06 21:01:26.900744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.781 [2024-12-06 21:01:26.900816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.781 [2024-12-06 21:01:26.900832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:09.781 [2024-12-06 21:01:26.900852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:34:09.781 [2024-12-06 21:01:26.900866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.939173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.939279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:10.041 [2024-12-06 21:01:26.939322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.248 ms 00:34:10.041 [2024-12-06 21:01:26.939340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.939382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.939401] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:10.041 [2024-12-06 21:01:26.939417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:10.041 [2024-12-06 21:01:26.939431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.939510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.939531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:10.041 [2024-12-06 21:01:26.939581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:34:10.041 [2024-12-06 21:01:26.939598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.939700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.939747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:10.041 [2024-12-06 21:01:26.939765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:34:10.041 [2024-12-06 21:01:26.939779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.950237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.950323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:10.041 [2024-12-06 21:01:26.950361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.411 ms 00:34:10.041 [2024-12-06 21:01:26.950378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.950473] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:10.041 [2024-12-06 21:01:26.950650] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:10.041 [2024-12-06 21:01:26.950698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.950716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:10.041 [2024-12-06 21:01:26.950731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:34:10.041 [2024-12-06 21:01:26.950746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.959908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.959985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:10.041 [2024-12-06 21:01:26.960085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.141 ms 00:34:10.041 [2024-12-06 21:01:26.960102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.960197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.960215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:10.041 [2024-12-06 21:01:26.960302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:10.041 [2024-12-06 21:01:26.960330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.960394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.960413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
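
Note: every management step in this log is traced by mngt/ftl_mngt.c as a fixed quadruple of records — Action (or Rollback), name, duration, status — so the whole startup/shutdown timeline can be recovered mechanically from the text. A rough sketch of doing so (the regex is tuned to the record layout visible in this capture, not an SPDK API):

    import re

    STEP = re.compile(
        r"name: (?P<name>.*?)\s+\d{2}:\d{2}:\d{2}\.\d+.*?"  # name runs up to the next elapsed timestamp
        r"duration: (?P<ms>[\d.]+) ms.*?"
        r"status: (?P<status>-?\d+)",
        re.S,
    )

    def steps(log_text: str):
        # Yield (step name, duration in ms, status) for each traced step.
        for m in STEP.finditer(log_text):
            yield m["name"], float(m["ms"]), int(m["status"])

    # e.g. ("Restore trim metadata", 0.001, 0) for the step logged around this point
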
00:34:10.041 [2024-12-06 21:01:26.960428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:34:10.041 [2024-12-06 21:01:26.960470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.960929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.960995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:10.041 [2024-12-06 21:01:26.961034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:34:10.041 [2024-12-06 21:01:26.961050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.961077] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:10.041 [2024-12-06 21:01:26.961128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.961144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:10.041 [2024-12-06 21:01:26.961158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:34:10.041 [2024-12-06 21:01:26.961190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.969704] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:10.041 [2024-12-06 21:01:26.969861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.970102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:10.041 [2024-12-06 21:01:26.970164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.646 ms 00:34:10.041 [2024-12-06 21:01:26.970183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.971842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.971927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:10.041 [2024-12-06 21:01:26.971968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.623 ms 00:34:10.041 [2024-12-06 21:01:26.971985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.972069] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:10.041 [2024-12-06 21:01:26.972431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.972446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:10.041 [2024-12-06 21:01:26.972453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:34:10.041 [2024-12-06 21:01:26.972459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.041 [2024-12-06 21:01:26.972492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.041 [2024-12-06 21:01:26.972499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:10.041 [2024-12-06 21:01:26.972505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:10.042 [2024-12-06 21:01:26.972511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.042 [2024-12-06 21:01:26.972535] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:10.042 [2024-12-06 21:01:26.972543] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.042 [2024-12-06 21:01:26.972549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:10.042 [2024-12-06 21:01:26.972555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:10.042 [2024-12-06 21:01:26.972560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.042 [2024-12-06 21:01:26.990551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.042 [2024-12-06 21:01:26.990578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:10.042 [2024-12-06 21:01:26.990587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.978 ms 00:34:10.042 [2024-12-06 21:01:26.990593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.042 [2024-12-06 21:01:26.990642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.042 [2024-12-06 21:01:26.990649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:10.042 [2024-12-06 21:01:26.990655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:34:10.042 [2024-12-06 21:01:26.990661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.042 [2024-12-06 21:01:26.991377] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 114.375 ms, result 0 00:34:11.419  [2024-12-06T21:01:29.495Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-06T21:01:30.429Z] Copying: 37/1024 [MB] (17 MBps) [2024-12-06T21:01:31.363Z] Copying: 52/1024 [MB] (14 MBps) [2024-12-06T21:01:32.298Z] Copying: 68/1024 [MB] (16 MBps) [2024-12-06T21:01:33.239Z] Copying: 93/1024 [MB] (25 MBps) [2024-12-06T21:01:34.222Z] Copying: 116/1024 [MB] (22 MBps) [2024-12-06T21:01:35.167Z] Copying: 127/1024 [MB] (10 MBps) [2024-12-06T21:01:36.545Z] Copying: 137/1024 [MB] (10 MBps) [2024-12-06T21:01:37.479Z] Copying: 148/1024 [MB] (11 MBps) [2024-12-06T21:01:38.413Z] Copying: 167/1024 [MB] (18 MBps) [2024-12-06T21:01:39.348Z] Copying: 196/1024 [MB] (29 MBps) [2024-12-06T21:01:40.288Z] Copying: 216/1024 [MB] (19 MBps) [2024-12-06T21:01:41.228Z] Copying: 237/1024 [MB] (21 MBps) [2024-12-06T21:01:42.163Z] Copying: 256/1024 [MB] (18 MBps) [2024-12-06T21:01:43.540Z] Copying: 271/1024 [MB] (14 MBps) [2024-12-06T21:01:44.475Z] Copying: 298/1024 [MB] (26 MBps) [2024-12-06T21:01:45.410Z] Copying: 327/1024 [MB] (29 MBps) [2024-12-06T21:01:46.350Z] Copying: 339/1024 [MB] (11 MBps) [2024-12-06T21:01:47.291Z] Copying: 351/1024 [MB] (11 MBps) [2024-12-06T21:01:48.230Z] Copying: 362/1024 [MB] (11 MBps) [2024-12-06T21:01:49.167Z] Copying: 375/1024 [MB] (12 MBps) [2024-12-06T21:01:50.541Z] Copying: 387/1024 [MB] (12 MBps) [2024-12-06T21:01:51.155Z] Copying: 399/1024 [MB] (12 MBps) [2024-12-06T21:01:52.528Z] Copying: 412/1024 [MB] (12 MBps) [2024-12-06T21:01:53.462Z] Copying: 423/1024 [MB] (11 MBps) [2024-12-06T21:01:54.398Z] Copying: 441/1024 [MB] (17 MBps) [2024-12-06T21:01:55.338Z] Copying: 463/1024 [MB] (22 MBps) [2024-12-06T21:01:56.275Z] Copying: 475/1024 [MB] (11 MBps) [2024-12-06T21:01:57.214Z] Copying: 490/1024 [MB] (15 MBps) [2024-12-06T21:01:58.156Z] Copying: 503/1024 [MB] (12 MBps) [2024-12-06T21:01:59.536Z] Copying: 514/1024 [MB] (10 MBps) [2024-12-06T21:02:00.472Z] Copying: 525/1024 [MB] (11 MBps) [2024-12-06T21:02:01.409Z] Copying: 545/1024 [MB] (19 MBps) [2024-12-06T21:02:02.343Z] Copying: 567/1024 [MB] (21 MBps) 
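
Note: the Copying entries in this run carry ISO timestamps and MB counts, which is enough to sanity-check the "(average 15 MBps)" figure printed when the run finishes below. A quick back-of-envelope in Python, with the values copied from the first and last entries of the run (the trailing Z is dropped so datetime.fromisoformat accepts the timestamps on older Pythons):

    from datetime import datetime

    t0 = datetime.fromisoformat("2024-12-06T21:01:29.495")  # first entry, 20 MB copied
    t1 = datetime.fromisoformat("2024-12-06T21:02:35.535")  # last entry, 1024 MB copied
    print((1024 - 20) / (t1 - t0).total_seconds())  # ~15.2 MB/s, consistent with the reported average
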
[2024-12-06T21:02:03.278Z] Copying: 585/1024 [MB] (18 MBps) [2024-12-06T21:02:04.218Z] Copying: 597/1024 [MB] (12 MBps) [2024-12-06T21:02:05.161Z] Copying: 609/1024 [MB] (12 MBps) [2024-12-06T21:02:06.542Z] Copying: 620/1024 [MB] (11 MBps) [2024-12-06T21:02:07.480Z] Copying: 632/1024 [MB] (11 MBps) [2024-12-06T21:02:08.443Z] Copying: 643/1024 [MB] (11 MBps) [2024-12-06T21:02:09.382Z] Copying: 654/1024 [MB] (11 MBps) [2024-12-06T21:02:10.325Z] Copying: 668/1024 [MB] (13 MBps) [2024-12-06T21:02:11.268Z] Copying: 679/1024 [MB] (11 MBps) [2024-12-06T21:02:12.209Z] Copying: 689/1024 [MB] (10 MBps) [2024-12-06T21:02:13.153Z] Copying: 699/1024 [MB] (10 MBps) [2024-12-06T21:02:14.541Z] Copying: 710/1024 [MB] (10 MBps) [2024-12-06T21:02:15.481Z] Copying: 720/1024 [MB] (10 MBps) [2024-12-06T21:02:16.446Z] Copying: 730/1024 [MB] (10 MBps) [2024-12-06T21:02:17.388Z] Copying: 741/1024 [MB] (10 MBps) [2024-12-06T21:02:18.330Z] Copying: 751/1024 [MB] (10 MBps) [2024-12-06T21:02:19.275Z] Copying: 762/1024 [MB] (10 MBps) [2024-12-06T21:02:20.216Z] Copying: 772/1024 [MB] (10 MBps) [2024-12-06T21:02:21.155Z] Copying: 783/1024 [MB] (10 MBps) [2024-12-06T21:02:22.562Z] Copying: 793/1024 [MB] (10 MBps) [2024-12-06T21:02:23.136Z] Copying: 804/1024 [MB] (10 MBps) [2024-12-06T21:02:24.525Z] Copying: 814/1024 [MB] (10 MBps) [2024-12-06T21:02:25.468Z] Copying: 830/1024 [MB] (16 MBps) [2024-12-06T21:02:26.411Z] Copying: 850/1024 [MB] (19 MBps) [2024-12-06T21:02:27.357Z] Copying: 868/1024 [MB] (17 MBps) [2024-12-06T21:02:28.302Z] Copying: 884/1024 [MB] (16 MBps) [2024-12-06T21:02:29.248Z] Copying: 902/1024 [MB] (18 MBps) [2024-12-06T21:02:30.237Z] Copying: 920/1024 [MB] (17 MBps) [2024-12-06T21:02:31.242Z] Copying: 938/1024 [MB] (17 MBps) [2024-12-06T21:02:32.187Z] Copying: 954/1024 [MB] (16 MBps) [2024-12-06T21:02:33.571Z] Copying: 974/1024 [MB] (19 MBps) [2024-12-06T21:02:34.145Z] Copying: 991/1024 [MB] (17 MBps) [2024-12-06T21:02:35.535Z] Copying: 1006/1024 [MB] (14 MBps) [2024-12-06T21:02:35.535Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-06 21:02:35.299746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.402 [2024-12-06 21:02:35.300251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:18.402 [2024-12-06 21:02:35.300336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:18.402 [2024-12-06 21:02:35.300365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.402 [2024-12-06 21:02:35.300422] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:18.402 [2024-12-06 21:02:35.303807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.402 [2024-12-06 21:02:35.303991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:18.402 [2024-12-06 21:02:35.304277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms 00:35:18.402 [2024-12-06 21:02:35.304317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.402 [2024-12-06 21:02:35.304603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.402 [2024-12-06 21:02:35.304742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:18.402 [2024-12-06 21:02:35.304759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:35:18.402 [2024-12-06 21:02:35.304770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.402 
[2024-12-06 21:02:35.304810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.402 [2024-12-06 21:02:35.304822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:18.402 [2024-12-06 21:02:35.304834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:18.402 [2024-12-06 21:02:35.304844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.402 [2024-12-06 21:02:35.304930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.402 [2024-12-06 21:02:35.304948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:18.402 [2024-12-06 21:02:35.304959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:35:18.402 [2024-12-06 21:02:35.304969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.402 [2024-12-06 21:02:35.304987] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:18.402 [2024-12-06 21:02:35.305002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:35:18.402 [2024-12-06 21:02:35.305015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
18: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305413] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305673] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 
21:02:35.305933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.305994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.306004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:18.402 [2024-12-06 21:02:35.306024] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:18.402 [2024-12-06 21:02:35.306034] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 57031464-d590-49dc-928a-15f887881385 00:35:18.402 [2024-12-06 21:02:35.306045] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:35:18.402 [2024-12-06 21:02:35.306056] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1824 00:35:18.402 [2024-12-06 21:02:35.306066] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1792 00:35:18.402 [2024-12-06 21:02:35.306080] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0179 00:35:18.402 [2024-12-06 21:02:35.306089] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:18.402 [2024-12-06 21:02:35.306100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:18.402 [2024-12-06 21:02:35.306109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:18.402 [2024-12-06 21:02:35.306117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:18.402 [2024-12-06 21:02:35.306127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:18.402 [2024-12-06 21:02:35.306137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.402 [2024-12-06 21:02:35.306147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:18.402 [2024-12-06 21:02:35.306158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:35:18.402 [2024-12-06 21:02:35.306168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.402 [2024-12-06 21:02:35.321912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.402 [2024-12-06 21:02:35.322069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:18.403 [2024-12-06 21:02:35.322134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.724 ms 00:35:18.403 [2024-12-06 21:02:35.322156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.322553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.403 [2024-12-06 21:02:35.322591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:18.403 [2024-12-06 
21:02:35.323462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:35:18.403 [2024-12-06 21:02:35.323515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.359919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:18.403 [2024-12-06 21:02:35.360101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:18.403 [2024-12-06 21:02:35.360166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:18.403 [2024-12-06 21:02:35.360191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.360279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:18.403 [2024-12-06 21:02:35.360303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:18.403 [2024-12-06 21:02:35.360325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:18.403 [2024-12-06 21:02:35.360343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.360419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:18.403 [2024-12-06 21:02:35.360449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:18.403 [2024-12-06 21:02:35.360471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:18.403 [2024-12-06 21:02:35.360540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.360575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:18.403 [2024-12-06 21:02:35.360595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:18.403 [2024-12-06 21:02:35.360616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:18.403 [2024-12-06 21:02:35.360634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.445755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:18.403 [2024-12-06 21:02:35.445946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:18.403 [2024-12-06 21:02:35.446006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:18.403 [2024-12-06 21:02:35.446029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.514121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:18.403 [2024-12-06 21:02:35.514312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:18.403 [2024-12-06 21:02:35.514373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:18.403 [2024-12-06 21:02:35.514397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.514503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:18.403 [2024-12-06 21:02:35.514529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:18.403 [2024-12-06 21:02:35.514554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:18.403 [2024-12-06 21:02:35.514573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.514623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:18.403 [2024-12-06 21:02:35.514687] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:18.403 [2024-12-06 21:02:35.514700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:18.403 [2024-12-06 21:02:35.514708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.514797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:18.403 [2024-12-06 21:02:35.514806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:18.403 [2024-12-06 21:02:35.514816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:18.403 [2024-12-06 21:02:35.514828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.514856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:18.403 [2024-12-06 21:02:35.514865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:18.403 [2024-12-06 21:02:35.514873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:18.403 [2024-12-06 21:02:35.514882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.514963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:18.403 [2024-12-06 21:02:35.514974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:18.403 [2024-12-06 21:02:35.514983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:18.403 [2024-12-06 21:02:35.514995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.515045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:18.403 [2024-12-06 21:02:35.515055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:18.403 [2024-12-06 21:02:35.515064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:18.403 [2024-12-06 21:02:35.515072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.403 [2024-12-06 21:02:35.515210] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 215.431 ms, result 0 00:35:19.343 00:35:19.343 00:35:19.343 21:02:36 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:21.891 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 83523 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 83523 ']' 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 83523 00:35:21.891 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83523) - No such process 00:35:21.891 Process with pid 83523 is not found 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- common/autotest_common.sh@981 
-- # echo 'Process with pid 83523 is not found' 00:35:21.891 Remove shared memory files 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_57031464-d590-49dc-928a-15f887881385_band_md /dev/hugepages/ftl_57031464-d590-49dc-928a-15f887881385_l2p_l1 /dev/hugepages/ftl_57031464-d590-49dc-928a-15f887881385_l2p_l2 /dev/hugepages/ftl_57031464-d590-49dc-928a-15f887881385_l2p_l2_ctx /dev/hugepages/ftl_57031464-d590-49dc-928a-15f887881385_nvc_md /dev/hugepages/ftl_57031464-d590-49dc-928a-15f887881385_p2l_pool /dev/hugepages/ftl_57031464-d590-49dc-928a-15f887881385_sb /dev/hugepages/ftl_57031464-d590-49dc-928a-15f887881385_sb_shm /dev/hugepages/ftl_57031464-d590-49dc-928a-15f887881385_trim_bitmap /dev/hugepages/ftl_57031464-d590-49dc-928a-15f887881385_trim_log /dev/hugepages/ftl_57031464-d590-49dc-928a-15f887881385_trim_md /dev/hugepages/ftl_57031464-d590-49dc-928a-15f887881385_vmap 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:21.891 21:02:38 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:35:21.891 00:35:21.892 real 4m24.556s 00:35:21.892 user 4m12.780s 00:35:21.892 sys 0m11.582s 00:35:21.892 21:02:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.892 ************************************ 00:35:21.892 END TEST ftl_restore_fast 00:35:21.892 ************************************ 00:35:21.892 21:02:38 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:35:21.892 21:02:38 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:35:21.892 21:02:38 ftl -- ftl/ftl.sh@14 -- # killprocess 74920 00:35:21.892 21:02:38 ftl -- common/autotest_common.sh@954 -- # '[' -z 74920 ']' 00:35:21.892 21:02:38 ftl -- common/autotest_common.sh@958 -- # kill -0 74920 00:35:21.892 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74920) - No such process 00:35:21.892 Process with pid 74920 is not found 00:35:21.892 21:02:38 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74920 is not found' 00:35:21.892 21:02:38 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:35:21.892 21:02:38 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86219 00:35:21.892 21:02:38 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86219 00:35:21.892 21:02:38 ftl -- common/autotest_common.sh@835 -- # '[' -z 86219 ']' 00:35:21.892 21:02:38 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:21.892 21:02:38 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:21.892 21:02:38 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:21.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:21.892 21:02:38 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:21.892 21:02:38 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:21.892 21:02:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:21.892 [2024-12-06 21:02:38.811505] Starting SPDK v25.01-pre git sha1 0354bb8e8 / DPDK 24.03.0 initialization... 
00:35:21.892 [2024-12-06 21:02:38.811657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86219 ] 00:35:21.892 [2024-12-06 21:02:38.978393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.153 [2024-12-06 21:02:39.095571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.724 21:02:39 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.724 21:02:39 ftl -- common/autotest_common.sh@868 -- # return 0 00:35:22.724 21:02:39 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:22.986 nvme0n1 00:35:22.986 21:02:40 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:35:22.986 21:02:40 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:22.986 21:02:40 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:23.248 21:02:40 ftl -- ftl/common.sh@28 -- # stores=2d1d5b3a-37a2-4d7d-b6d8-6cea55fb50f2 00:35:23.248 21:02:40 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:35:23.248 21:02:40 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2d1d5b3a-37a2-4d7d-b6d8-6cea55fb50f2 00:35:23.510 21:02:40 ftl -- ftl/ftl.sh@23 -- # killprocess 86219 00:35:23.510 21:02:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 86219 ']' 00:35:23.510 21:02:40 ftl -- common/autotest_common.sh@958 -- # kill -0 86219 00:35:23.510 21:02:40 ftl -- common/autotest_common.sh@959 -- # uname 00:35:23.510 21:02:40 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.510 21:02:40 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86219 00:35:23.510 killing process with pid 86219 00:35:23.510 21:02:40 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:23.510 21:02:40 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:23.510 21:02:40 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86219' 00:35:23.510 21:02:40 ftl -- common/autotest_common.sh@973 -- # kill 86219 00:35:23.510 21:02:40 ftl -- common/autotest_common.sh@978 -- # wait 86219 00:35:25.429 21:02:42 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:25.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:25.429 Waiting for block devices as requested 00:35:25.429 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:25.429 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:25.429 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:25.429 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:30.726 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:30.726 21:02:47 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:35:30.726 Remove shared memory files 00:35:30.726 21:02:47 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:30.726 21:02:47 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:35:30.726 21:02:47 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:35:30.726 21:02:47 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:35:30.726 21:02:47 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:30.726 21:02:47 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:35:30.726 00:35:30.726 real 
17m4.775s 00:35:30.726 user 19m11.746s 00:35:30.726 sys 1m15.205s 00:35:30.726 21:02:47 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:30.726 ************************************ 00:35:30.726 END TEST ftl 00:35:30.726 ************************************ 00:35:30.726 21:02:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:30.726 21:02:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:30.726 21:02:47 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:30.726 21:02:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:30.726 21:02:47 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:30.726 21:02:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:30.726 21:02:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:30.726 21:02:47 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:30.726 21:02:47 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:30.726 21:02:47 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:30.726 21:02:47 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:30.726 21:02:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:30.726 21:02:47 -- common/autotest_common.sh@10 -- # set +x 00:35:30.726 21:02:47 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:30.726 21:02:47 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:30.726 21:02:47 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:30.726 21:02:47 -- common/autotest_common.sh@10 -- # set +x 00:35:32.114 INFO: APP EXITING 00:35:32.114 INFO: killing all VMs 00:35:32.114 INFO: killing vhost app 00:35:32.114 INFO: EXIT DONE 00:35:32.373 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:32.631 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:35:32.631 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:35:32.631 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:35:32.631 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:35:33.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:33.466 Cleaning 00:35:33.466 Removing: /var/run/dpdk/spdk0/config 00:35:33.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:33.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:33.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:33.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:33.466 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:33.466 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:33.466 Removing: /var/run/dpdk/spdk0 00:35:33.466 Removing: /var/run/dpdk/spdk_pid56919 00:35:33.466 Removing: /var/run/dpdk/spdk_pid57110 00:35:33.466 Removing: /var/run/dpdk/spdk_pid57325 00:35:33.466 Removing: /var/run/dpdk/spdk_pid57421 00:35:33.466 Removing: /var/run/dpdk/spdk_pid57455 00:35:33.466 Removing: /var/run/dpdk/spdk_pid57577 00:35:33.466 Removing: /var/run/dpdk/spdk_pid57590 00:35:33.466 Removing: /var/run/dpdk/spdk_pid57784 00:35:33.466 Removing: /var/run/dpdk/spdk_pid57877 00:35:33.466 Removing: /var/run/dpdk/spdk_pid57973 00:35:33.466 Removing: /var/run/dpdk/spdk_pid58078 00:35:33.466 Removing: /var/run/dpdk/spdk_pid58175 00:35:33.466 Removing: /var/run/dpdk/spdk_pid58215 00:35:33.466 Removing: /var/run/dpdk/spdk_pid58246 00:35:33.466 Removing: /var/run/dpdk/spdk_pid58322 00:35:33.466 Removing: /var/run/dpdk/spdk_pid58406 00:35:33.466 Removing: /var/run/dpdk/spdk_pid58842 00:35:33.466 Removing: /var/run/dpdk/spdk_pid58906 00:35:33.466 
Removing: /var/run/dpdk/spdk_pid58958 00:35:33.466 Removing: /var/run/dpdk/spdk_pid58974 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59076 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59092 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59183 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59199 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59252 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59270 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59323 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59341 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59501 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59538 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59621 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59788 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59872 00:35:33.467 Removing: /var/run/dpdk/spdk_pid59914 00:35:33.467 Removing: /var/run/dpdk/spdk_pid60338 00:35:33.467 Removing: /var/run/dpdk/spdk_pid60436 00:35:33.467 Removing: /var/run/dpdk/spdk_pid60558 00:35:33.467 Removing: /var/run/dpdk/spdk_pid60611 00:35:33.467 Removing: /var/run/dpdk/spdk_pid60631 00:35:33.467 Removing: /var/run/dpdk/spdk_pid60715 00:35:33.467 Removing: /var/run/dpdk/spdk_pid61331 00:35:33.467 Removing: /var/run/dpdk/spdk_pid61373 00:35:33.467 Removing: /var/run/dpdk/spdk_pid61840 00:35:33.467 Removing: /var/run/dpdk/spdk_pid61938 00:35:33.467 Removing: /var/run/dpdk/spdk_pid62047 00:35:33.467 Removing: /var/run/dpdk/spdk_pid62099 00:35:33.467 Removing: /var/run/dpdk/spdk_pid62120 00:35:33.467 Removing: /var/run/dpdk/spdk_pid62151 00:35:33.467 Removing: /var/run/dpdk/spdk_pid63985 00:35:33.467 Removing: /var/run/dpdk/spdk_pid64122 00:35:33.467 Removing: /var/run/dpdk/spdk_pid64126 00:35:33.467 Removing: /var/run/dpdk/spdk_pid64138 00:35:33.467 Removing: /var/run/dpdk/spdk_pid64185 00:35:33.467 Removing: /var/run/dpdk/spdk_pid64189 00:35:33.467 Removing: /var/run/dpdk/spdk_pid64201 00:35:33.467 Removing: /var/run/dpdk/spdk_pid64246 00:35:33.467 Removing: /var/run/dpdk/spdk_pid64250 00:35:33.467 Removing: /var/run/dpdk/spdk_pid64262 00:35:33.467 Removing: /var/run/dpdk/spdk_pid64307 00:35:33.467 Removing: /var/run/dpdk/spdk_pid64311 00:35:33.467 Removing: /var/run/dpdk/spdk_pid64323 00:35:33.467 Removing: /var/run/dpdk/spdk_pid65704 00:35:33.467 Removing: /var/run/dpdk/spdk_pid65801 00:35:33.467 Removing: /var/run/dpdk/spdk_pid67210 00:35:33.467 Removing: /var/run/dpdk/spdk_pid68950 00:35:33.467 Removing: /var/run/dpdk/spdk_pid69018 00:35:33.467 Removing: /var/run/dpdk/spdk_pid69088 00:35:33.467 Removing: /var/run/dpdk/spdk_pid69199 00:35:33.467 Removing: /var/run/dpdk/spdk_pid69290 00:35:33.467 Removing: /var/run/dpdk/spdk_pid69386 00:35:33.467 Removing: /var/run/dpdk/spdk_pid69460 00:35:33.467 Removing: /var/run/dpdk/spdk_pid69535 00:35:33.467 Removing: /var/run/dpdk/spdk_pid69645 00:35:33.467 Removing: /var/run/dpdk/spdk_pid69731 00:35:33.467 Removing: /var/run/dpdk/spdk_pid69827 00:35:33.467 Removing: /var/run/dpdk/spdk_pid69901 00:35:33.724 Removing: /var/run/dpdk/spdk_pid69973 00:35:33.724 Removing: /var/run/dpdk/spdk_pid70077 00:35:33.724 Removing: /var/run/dpdk/spdk_pid70170 00:35:33.724 Removing: /var/run/dpdk/spdk_pid70264 00:35:33.724 Removing: /var/run/dpdk/spdk_pid70333 00:35:33.724 Removing: /var/run/dpdk/spdk_pid70408 00:35:33.724 Removing: /var/run/dpdk/spdk_pid70512 00:35:33.724 Removing: /var/run/dpdk/spdk_pid70604 00:35:33.724 Removing: /var/run/dpdk/spdk_pid70700 00:35:33.724 Removing: /var/run/dpdk/spdk_pid70768 00:35:33.724 Removing: /var/run/dpdk/spdk_pid70838 00:35:33.724 Removing: 
/var/run/dpdk/spdk_pid70913 00:35:33.724 Removing: /var/run/dpdk/spdk_pid70987 00:35:33.724 Removing: /var/run/dpdk/spdk_pid71095 00:35:33.724 Removing: /var/run/dpdk/spdk_pid71181 00:35:33.724 Removing: /var/run/dpdk/spdk_pid71276 00:35:33.724 Removing: /var/run/dpdk/spdk_pid71339 00:35:33.724 Removing: /var/run/dpdk/spdk_pid71414 00:35:33.724 Removing: /var/run/dpdk/spdk_pid71493 00:35:33.724 Removing: /var/run/dpdk/spdk_pid71562 00:35:33.724 Removing: /var/run/dpdk/spdk_pid71665 00:35:33.724 Removing: /var/run/dpdk/spdk_pid71756 00:35:33.724 Removing: /var/run/dpdk/spdk_pid71905 00:35:33.724 Removing: /var/run/dpdk/spdk_pid72184 00:35:33.724 Removing: /var/run/dpdk/spdk_pid72220 00:35:33.724 Removing: /var/run/dpdk/spdk_pid72680 00:35:33.724 Removing: /var/run/dpdk/spdk_pid72868 00:35:33.724 Removing: /var/run/dpdk/spdk_pid72961 00:35:33.724 Removing: /var/run/dpdk/spdk_pid73072 00:35:33.724 Removing: /var/run/dpdk/spdk_pid73125 00:35:33.724 Removing: /var/run/dpdk/spdk_pid73145 00:35:33.724 Removing: /var/run/dpdk/spdk_pid73457 00:35:33.724 Removing: /var/run/dpdk/spdk_pid73512 00:35:33.724 Removing: /var/run/dpdk/spdk_pid73582 00:35:33.724 Removing: /var/run/dpdk/spdk_pid73979 00:35:33.724 Removing: /var/run/dpdk/spdk_pid74125 00:35:33.724 Removing: /var/run/dpdk/spdk_pid74920 00:35:33.724 Removing: /var/run/dpdk/spdk_pid75058 00:35:33.724 Removing: /var/run/dpdk/spdk_pid75229 00:35:33.724 Removing: /var/run/dpdk/spdk_pid75344 00:35:33.724 Removing: /var/run/dpdk/spdk_pid75669 00:35:33.724 Removing: /var/run/dpdk/spdk_pid75945 00:35:33.724 Removing: /var/run/dpdk/spdk_pid76289 00:35:33.724 Removing: /var/run/dpdk/spdk_pid76472 00:35:33.724 Removing: /var/run/dpdk/spdk_pid76634 00:35:33.724 Removing: /var/run/dpdk/spdk_pid76688 00:35:33.724 Removing: /var/run/dpdk/spdk_pid76842 00:35:33.724 Removing: /var/run/dpdk/spdk_pid76867 00:35:33.724 Removing: /var/run/dpdk/spdk_pid76922 00:35:33.724 Removing: /var/run/dpdk/spdk_pid77210 00:35:33.724 Removing: /var/run/dpdk/spdk_pid77425 00:35:33.724 Removing: /var/run/dpdk/spdk_pid77708 00:35:33.724 Removing: /var/run/dpdk/spdk_pid78330 00:35:33.724 Removing: /var/run/dpdk/spdk_pid79009 00:35:33.724 Removing: /var/run/dpdk/spdk_pid79884 00:35:33.724 Removing: /var/run/dpdk/spdk_pid80037 00:35:33.724 Removing: /var/run/dpdk/spdk_pid80124 00:35:33.724 Removing: /var/run/dpdk/spdk_pid80471 00:35:33.724 Removing: /var/run/dpdk/spdk_pid80530 00:35:33.724 Removing: /var/run/dpdk/spdk_pid81253 00:35:33.724 Removing: /var/run/dpdk/spdk_pid81711 00:35:33.724 Removing: /var/run/dpdk/spdk_pid82516 00:35:33.724 Removing: /var/run/dpdk/spdk_pid82631 00:35:33.724 Removing: /var/run/dpdk/spdk_pid82673 00:35:33.724 Removing: /var/run/dpdk/spdk_pid82738 00:35:33.724 Removing: /var/run/dpdk/spdk_pid82792 00:35:33.724 Removing: /var/run/dpdk/spdk_pid82845 00:35:33.724 Removing: /var/run/dpdk/spdk_pid83057 00:35:33.724 Removing: /var/run/dpdk/spdk_pid83136 00:35:33.724 Removing: /var/run/dpdk/spdk_pid83204 00:35:33.724 Removing: /var/run/dpdk/spdk_pid83282 00:35:33.724 Removing: /var/run/dpdk/spdk_pid83317 00:35:33.724 Removing: /var/run/dpdk/spdk_pid83378 00:35:33.724 Removing: /var/run/dpdk/spdk_pid83523 00:35:33.724 Removing: /var/run/dpdk/spdk_pid83752 00:35:33.724 Removing: /var/run/dpdk/spdk_pid84423 00:35:33.724 Removing: /var/run/dpdk/spdk_pid85094 00:35:33.724 Removing: /var/run/dpdk/spdk_pid85474 00:35:33.724 Removing: /var/run/dpdk/spdk_pid86219 00:35:33.724 Clean 00:35:33.724 21:02:50 -- common/autotest_common.sh@1453 -- # return 0 00:35:33.724 
21:02:50 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:33.724 21:02:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:33.724 21:02:50 -- common/autotest_common.sh@10 -- # set +x 00:35:33.982 21:02:50 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:35:33.982 21:02:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:33.982 21:02:50 -- common/autotest_common.sh@10 -- # set +x 00:35:33.982 21:02:50 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:33.982 21:02:50 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:35:33.982 21:02:50 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:35:33.982 21:02:50 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:35:33.982 21:02:50 -- spdk/autotest.sh@398 -- # hostname 00:35:33.982 21:02:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:35:33.982 geninfo: WARNING: invalid characters removed from testname! 00:36:00.613 21:03:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:02.531 21:03:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:05.076 21:03:22 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:07.619 21:03:24 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:10.163 21:03:26 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:11.548 21:03:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:14.095 21:03:30 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:14.095 21:03:30 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:14.095 21:03:30 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:36:14.095 21:03:30 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:14.095 21:03:30 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:14.095 21:03:30 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:14.095 + [[ -n 5026 ]] 00:36:14.095 + sudo kill 5026 00:36:14.106 [Pipeline] } 00:36:14.124 [Pipeline] // timeout 00:36:14.131 [Pipeline] } 00:36:14.144 [Pipeline] // stage 00:36:14.149 [Pipeline] } 00:36:14.161 [Pipeline] // catchError 00:36:14.180 [Pipeline] stage 00:36:14.182 [Pipeline] { (Stop VM) 00:36:14.193 [Pipeline] sh 00:36:14.475 + vagrant halt 00:36:17.021 ==> default: Halting domain... 00:36:20.330 [Pipeline] sh 00:36:20.612 + vagrant destroy -f 00:36:23.159 ==> default: Removing domain... 00:36:23.746 [Pipeline] sh 00:36:24.049 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:36:24.070 [Pipeline] } 00:36:24.086 [Pipeline] // stage 00:36:24.093 [Pipeline] } 00:36:24.109 [Pipeline] // dir 00:36:24.114 [Pipeline] } 00:36:24.128 [Pipeline] // wrap 00:36:24.135 [Pipeline] } 00:36:24.148 [Pipeline] // catchError 00:36:24.158 [Pipeline] stage 00:36:24.160 [Pipeline] { (Epilogue) 00:36:24.173 [Pipeline] sh 00:36:24.460 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:29.749 [Pipeline] catchError 00:36:29.751 [Pipeline] { 00:36:29.761 [Pipeline] sh 00:36:30.044 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:30.044 Artifacts sizes are good 00:36:30.055 [Pipeline] } 00:36:30.067 [Pipeline] // catchError 00:36:30.076 [Pipeline] archiveArtifacts 00:36:30.083 Archiving artifacts 00:36:30.176 [Pipeline] cleanWs 00:36:30.186 [WS-CLEANUP] Deleting project workspace... 00:36:30.186 [WS-CLEANUP] Deferred wipeout is used... 00:36:30.193 [WS-CLEANUP] done 00:36:30.195 [Pipeline] } 00:36:30.204 [Pipeline] // stage 00:36:30.208 [Pipeline] } 00:36:30.216 [Pipeline] // node 00:36:30.220 [Pipeline] End of Pipeline 00:36:30.245 Finished: SUCCESS